Test Report: KVM_Linux_crio 17223

f9ecce707d93fa4241f904962674ddf295a62997:2023-09-11:30961

Failed tests (27/288)

Order  Failed test  Duration (s)
25 TestAddons/parallel/Ingress 155.99
26 TestAddons/parallel/InspektorGadget 7.77
36 TestAddons/StoppedEnableDisable 155.32
152 TestIngressAddonLegacy/serial/ValidateIngressAddons 166.73
200 TestMultiNode/serial/PingHostFrom2Pods 3.34
206 TestMultiNode/serial/RestartKeepsNodes 690.24
208 TestMultiNode/serial/StopMultiNode 143.22
215 TestPreload 188.27
221 TestRunningBinaryUpgrade 172.84
226 TestStoppedBinaryUpgrade/Upgrade 316.1
252 TestPause/serial/SecondStartNoReconfiguration 78.09
268 TestStartStop/group/no-preload/serial/Stop 140.21
270 TestStartStop/group/embed-certs/serial/Stop 140.25
273 TestStartStop/group/old-k8s-version/serial/Stop 140.01
277 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.38
279 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.36
280 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.38
282 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 12.38
285 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
287 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.35
288 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 543.4
289 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 543.34
290 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 543.29
291 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 537.41
292 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 375.11
293 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 542.55
294 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 221.92
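Each failure above can be re-run in isolation before digging through the post-mortem logs. The commands below are a minimal sketch only: they assume a minikube source checkout with the integration tests under test/integration and the freshly built out/minikube-linux-amd64 binary, and they omit the extra harness flags this CI job passes (driver and runtime selection), which are not reproduced here.

    # Hedged sketch: re-run a single failed test by name; the pattern and timeout are examples.
    go test ./test/integration -v -timeout 90m -run "TestAddons/parallel/Ingress"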
TestAddons/parallel/Ingress (155.99s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:183: (dbg) Run:  kubectl --context addons-554886 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:208: (dbg) Run:  kubectl --context addons-554886 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context addons-554886 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [334139c1-49e6-47ff-b89b-d4b0bbe9e4dc] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [334139c1-49e6-47ff-b89b-d4b0bbe9e4dc] Running
addons_test.go:226: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 12.016276159s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-554886 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.457737192s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context addons-554886 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.217
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p addons-554886 addons disable ingress-dns --alsologtostderr -v=1: (1.444069905s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p addons-554886 addons disable ingress --alsologtostderr -v=1: (7.892626867s)
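For context on the failure above: the "ssh: Process exited with status 28" in the stderr block is curl's exit code for a timed-out transfer, i.e. the nginx ingress never answered on 127.0.0.1 inside the VM before the deadline. A manual re-check against the same profile could look like the sketch below; the profile name, namespace, Host header and controller label selector are taken from this log, while the 30-second curl timeout is only an example.

    # Hedged sketch: repeat the failing request by hand, then inspect the ingress controller.
    out/minikube-linux-amd64 -p addons-554886 ssh "curl -sv --max-time 30 http://127.0.0.1/ -H 'Host: nginx.example.com'"
    kubectl --context addons-554886 -n ingress-nginx get pods -l app.kubernetes.io/component=controller -o wide
    kubectl --context addons-554886 -n ingress-nginx logs -l app.kubernetes.io/component=controller --tail=100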
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-554886 -n addons-554886
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-554886 logs -n 25: (1.348617589s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |                     |
	|         | -p download-only-461050        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |                     |
	|         | -p download-only-461050        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| delete  | -p download-only-461050        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| delete  | -p download-only-461050        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| start   | --download-only -p             | binary-mirror-417783 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |                     |
	|         | binary-mirror-417783           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34313         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-417783        | binary-mirror-417783 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| start   | -p addons-554886               | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:59 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | -p addons-554886               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | addons-554886                  |                      |         |         |                     |                     |
	| addons  | addons-554886 addons           | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-554886 ip               | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	| addons  | addons-554886 addons disable   | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-554886 addons disable   | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC |                     |
	|         | addons-554886                  |                      |         |         |                     |                     |
	| ssh     | addons-554886 ssh curl -s      | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:    |                      |         |         |                     |                     |
	|         | nginx.example.com'             |                      |         |         |                     |                     |
	| addons  | addons-554886 addons           | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 11:00 UTC | 11 Sep 23 11:00 UTC |
	|         | disable csi-hostpath-driver    |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | addons-554886 addons           | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 11:00 UTC | 11 Sep 23 11:00 UTC |
	|         | disable volumesnapshots        |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-554886 ip               | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 11:01 UTC | 11 Sep 23 11:01 UTC |
	| addons  | addons-554886 addons disable   | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 11:01 UTC | 11 Sep 23 11:01 UTC |
	|         | ingress-dns --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-554886 addons disable   | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 11:01 UTC | 11 Sep 23 11:02 UTC |
	|         | ingress --alsologtostderr -v=1 |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 10:56:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 10:56:49.579022 2222784 out.go:296] Setting OutFile to fd 1 ...
	I0911 10:56:49.579186 2222784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:49.579196 2222784 out.go:309] Setting ErrFile to fd 2...
	I0911 10:56:49.579203 2222784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:49.579424 2222784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 10:56:49.580088 2222784 out.go:303] Setting JSON to false
	I0911 10:56:49.581079 2222784 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":232761,"bootTime":1694197049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 10:56:49.581152 2222784 start.go:138] virtualization: kvm guest
	I0911 10:56:49.584066 2222784 out.go:177] * [addons-554886] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 10:56:49.585986 2222784 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 10:56:49.585931 2222784 notify.go:220] Checking for updates...
	I0911 10:56:49.587749 2222784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 10:56:49.589559 2222784 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 10:56:49.591322 2222784 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:49.593117 2222784 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 10:56:49.595380 2222784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 10:56:49.597333 2222784 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 10:56:49.632682 2222784 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 10:56:49.634208 2222784 start.go:298] selected driver: kvm2
	I0911 10:56:49.634228 2222784 start.go:902] validating driver "kvm2" against <nil>
	I0911 10:56:49.634253 2222784 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 10:56:49.635286 2222784 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 10:56:49.635384 2222784 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 10:56:49.651187 2222784 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 10:56:49.651247 2222784 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 10:56:49.651482 2222784 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 10:56:49.651518 2222784 cni.go:84] Creating CNI manager for ""
	I0911 10:56:49.651530 2222784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 10:56:49.651542 2222784 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 10:56:49.651550 2222784 start_flags.go:321] config:
	{Name:addons-554886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISo
cket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 10:56:49.651679 2222784 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 10:56:49.653639 2222784 out.go:177] * Starting control plane node addons-554886 in cluster addons-554886
	I0911 10:56:49.655305 2222784 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 10:56:49.655347 2222784 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 10:56:49.655360 2222784 cache.go:57] Caching tarball of preloaded images
	I0911 10:56:49.655449 2222784 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 10:56:49.655463 2222784 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 10:56:49.655843 2222784 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/config.json ...
	I0911 10:56:49.655874 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/config.json: {Name:mkb9d47aea5b20199ee73d14d304ac7e99ccbda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:56:49.656026 2222784 start.go:365] acquiring machines lock for addons-554886: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 10:56:49.656074 2222784 start.go:369] acquired machines lock for "addons-554886" in 31.701µs
	I0911 10:56:49.656115 2222784 start.go:93] Provisioning new machine with config: &{Name:addons-554886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 10:56:49.656210 2222784 start.go:125] createHost starting for "" (driver="kvm2")
	I0911 10:56:49.658315 2222784 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0911 10:56:49.658480 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:56:49.658542 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:56:49.673999 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45385
	I0911 10:56:49.674546 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:56:49.675244 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:56:49.675271 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:56:49.675636 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:56:49.675864 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:56:49.676055 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:56:49.676240 2222784 start.go:159] libmachine.API.Create for "addons-554886" (driver="kvm2")
	I0911 10:56:49.676272 2222784 client.go:168] LocalClient.Create starting
	I0911 10:56:49.676357 2222784 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem
	I0911 10:56:49.810301 2222784 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem
	I0911 10:56:49.916295 2222784 main.go:141] libmachine: Running pre-create checks...
	I0911 10:56:49.916322 2222784 main.go:141] libmachine: (addons-554886) Calling .PreCreateCheck
	I0911 10:56:49.916981 2222784 main.go:141] libmachine: (addons-554886) Calling .GetConfigRaw
	I0911 10:56:49.917538 2222784 main.go:141] libmachine: Creating machine...
	I0911 10:56:49.917560 2222784 main.go:141] libmachine: (addons-554886) Calling .Create
	I0911 10:56:49.917795 2222784 main.go:141] libmachine: (addons-554886) Creating KVM machine...
	I0911 10:56:49.919242 2222784 main.go:141] libmachine: (addons-554886) DBG | found existing default KVM network
	I0911 10:56:49.920187 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:49.920013 2222816 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298b0}
	I0911 10:56:49.926251 2222784 main.go:141] libmachine: (addons-554886) DBG | trying to create private KVM network mk-addons-554886 192.168.39.0/24...
	I0911 10:56:50.003766 2222784 main.go:141] libmachine: (addons-554886) DBG | private KVM network mk-addons-554886 192.168.39.0/24 created
	I0911 10:56:50.003806 2222784 main.go:141] libmachine: (addons-554886) Setting up store path in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886 ...
	I0911 10:56:50.003889 2222784 main.go:141] libmachine: (addons-554886) Building disk image from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 10:56:50.003935 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.003761 2222816 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:50.003973 2222784 main.go:141] libmachine: (addons-554886) Downloading /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0911 10:56:50.260017 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.259871 2222816 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa...
	I0911 10:56:50.381805 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.381599 2222816 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/addons-554886.rawdisk...
	I0911 10:56:50.381849 2222784 main.go:141] libmachine: (addons-554886) DBG | Writing magic tar header
	I0911 10:56:50.381866 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886 (perms=drwx------)
	I0911 10:56:50.381884 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines (perms=drwxr-xr-x)
	I0911 10:56:50.381893 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube (perms=drwxr-xr-x)
	I0911 10:56:50.381911 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273 (perms=drwxrwxr-x)
	I0911 10:56:50.381923 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0911 10:56:50.381938 2222784 main.go:141] libmachine: (addons-554886) DBG | Writing SSH key tar header
	I0911 10:56:50.381951 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0911 10:56:50.381970 2222784 main.go:141] libmachine: (addons-554886) Creating domain...
	I0911 10:56:50.381991 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.381729 2222816 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886 ...
	I0911 10:56:50.382011 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886
	I0911 10:56:50.382031 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines
	I0911 10:56:50.382053 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:50.382071 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273
	I0911 10:56:50.382081 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0911 10:56:50.382095 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins
	I0911 10:56:50.382106 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home
	I0911 10:56:50.382115 2222784 main.go:141] libmachine: (addons-554886) DBG | Skipping /home - not owner
	I0911 10:56:50.383470 2222784 main.go:141] libmachine: (addons-554886) define libvirt domain using xml: 
	I0911 10:56:50.383500 2222784 main.go:141] libmachine: (addons-554886) <domain type='kvm'>
	I0911 10:56:50.383508 2222784 main.go:141] libmachine: (addons-554886)   <name>addons-554886</name>
	I0911 10:56:50.383513 2222784 main.go:141] libmachine: (addons-554886)   <memory unit='MiB'>4000</memory>
	I0911 10:56:50.383520 2222784 main.go:141] libmachine: (addons-554886)   <vcpu>2</vcpu>
	I0911 10:56:50.383525 2222784 main.go:141] libmachine: (addons-554886)   <features>
	I0911 10:56:50.383531 2222784 main.go:141] libmachine: (addons-554886)     <acpi/>
	I0911 10:56:50.383535 2222784 main.go:141] libmachine: (addons-554886)     <apic/>
	I0911 10:56:50.383541 2222784 main.go:141] libmachine: (addons-554886)     <pae/>
	I0911 10:56:50.383549 2222784 main.go:141] libmachine: (addons-554886)     
	I0911 10:56:50.383555 2222784 main.go:141] libmachine: (addons-554886)   </features>
	I0911 10:56:50.383563 2222784 main.go:141] libmachine: (addons-554886)   <cpu mode='host-passthrough'>
	I0911 10:56:50.383584 2222784 main.go:141] libmachine: (addons-554886)   
	I0911 10:56:50.383595 2222784 main.go:141] libmachine: (addons-554886)   </cpu>
	I0911 10:56:50.383636 2222784 main.go:141] libmachine: (addons-554886)   <os>
	I0911 10:56:50.383691 2222784 main.go:141] libmachine: (addons-554886)     <type>hvm</type>
	I0911 10:56:50.383707 2222784 main.go:141] libmachine: (addons-554886)     <boot dev='cdrom'/>
	I0911 10:56:50.383713 2222784 main.go:141] libmachine: (addons-554886)     <boot dev='hd'/>
	I0911 10:56:50.383719 2222784 main.go:141] libmachine: (addons-554886)     <bootmenu enable='no'/>
	I0911 10:56:50.383729 2222784 main.go:141] libmachine: (addons-554886)   </os>
	I0911 10:56:50.383737 2222784 main.go:141] libmachine: (addons-554886)   <devices>
	I0911 10:56:50.383745 2222784 main.go:141] libmachine: (addons-554886)     <disk type='file' device='cdrom'>
	I0911 10:56:50.383787 2222784 main.go:141] libmachine: (addons-554886)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/boot2docker.iso'/>
	I0911 10:56:50.383819 2222784 main.go:141] libmachine: (addons-554886)       <target dev='hdc' bus='scsi'/>
	I0911 10:56:50.383835 2222784 main.go:141] libmachine: (addons-554886)       <readonly/>
	I0911 10:56:50.383848 2222784 main.go:141] libmachine: (addons-554886)     </disk>
	I0911 10:56:50.383874 2222784 main.go:141] libmachine: (addons-554886)     <disk type='file' device='disk'>
	I0911 10:56:50.383889 2222784 main.go:141] libmachine: (addons-554886)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0911 10:56:50.383914 2222784 main.go:141] libmachine: (addons-554886)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/addons-554886.rawdisk'/>
	I0911 10:56:50.383932 2222784 main.go:141] libmachine: (addons-554886)       <target dev='hda' bus='virtio'/>
	I0911 10:56:50.383945 2222784 main.go:141] libmachine: (addons-554886)     </disk>
	I0911 10:56:50.383955 2222784 main.go:141] libmachine: (addons-554886)     <interface type='network'>
	I0911 10:56:50.383969 2222784 main.go:141] libmachine: (addons-554886)       <source network='mk-addons-554886'/>
	I0911 10:56:50.383980 2222784 main.go:141] libmachine: (addons-554886)       <model type='virtio'/>
	I0911 10:56:50.383993 2222784 main.go:141] libmachine: (addons-554886)     </interface>
	I0911 10:56:50.384009 2222784 main.go:141] libmachine: (addons-554886)     <interface type='network'>
	I0911 10:56:50.384023 2222784 main.go:141] libmachine: (addons-554886)       <source network='default'/>
	I0911 10:56:50.384035 2222784 main.go:141] libmachine: (addons-554886)       <model type='virtio'/>
	I0911 10:56:50.384045 2222784 main.go:141] libmachine: (addons-554886)     </interface>
	I0911 10:56:50.384057 2222784 main.go:141] libmachine: (addons-554886)     <serial type='pty'>
	I0911 10:56:50.384068 2222784 main.go:141] libmachine: (addons-554886)       <target port='0'/>
	I0911 10:56:50.384084 2222784 main.go:141] libmachine: (addons-554886)     </serial>
	I0911 10:56:50.384096 2222784 main.go:141] libmachine: (addons-554886)     <console type='pty'>
	I0911 10:56:50.384107 2222784 main.go:141] libmachine: (addons-554886)       <target type='serial' port='0'/>
	I0911 10:56:50.384119 2222784 main.go:141] libmachine: (addons-554886)     </console>
	I0911 10:56:50.384130 2222784 main.go:141] libmachine: (addons-554886)     <rng model='virtio'>
	I0911 10:56:50.384142 2222784 main.go:141] libmachine: (addons-554886)       <backend model='random'>/dev/random</backend>
	I0911 10:56:50.384163 2222784 main.go:141] libmachine: (addons-554886)     </rng>
	I0911 10:56:50.384175 2222784 main.go:141] libmachine: (addons-554886)     
	I0911 10:56:50.384186 2222784 main.go:141] libmachine: (addons-554886)     
	I0911 10:56:50.384199 2222784 main.go:141] libmachine: (addons-554886)   </devices>
	I0911 10:56:50.384211 2222784 main.go:141] libmachine: (addons-554886) </domain>
	I0911 10:56:50.384231 2222784 main.go:141] libmachine: (addons-554886) 
	I0911 10:56:50.389287 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:f6:ba:d8 in network default
	I0911 10:56:50.390121 2222784 main.go:141] libmachine: (addons-554886) Ensuring networks are active...
	I0911 10:56:50.390163 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:50.390924 2222784 main.go:141] libmachine: (addons-554886) Ensuring network default is active
	I0911 10:56:50.391557 2222784 main.go:141] libmachine: (addons-554886) Ensuring network mk-addons-554886 is active
	I0911 10:56:50.392078 2222784 main.go:141] libmachine: (addons-554886) Getting domain xml...
	I0911 10:56:50.392870 2222784 main.go:141] libmachine: (addons-554886) Creating domain...
	I0911 10:56:51.638833 2222784 main.go:141] libmachine: (addons-554886) Waiting to get IP...
	I0911 10:56:51.639727 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:51.640136 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:51.640205 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:51.640115 2222816 retry.go:31] will retry after 221.869338ms: waiting for machine to come up
	I0911 10:56:51.863778 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:51.864281 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:51.864313 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:51.864225 2222816 retry.go:31] will retry after 382.483832ms: waiting for machine to come up
	I0911 10:56:52.249137 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:52.249544 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:52.249568 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:52.249495 2222816 retry.go:31] will retry after 373.419457ms: waiting for machine to come up
	I0911 10:56:52.624135 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:52.624575 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:52.624605 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:52.624532 2222816 retry.go:31] will retry after 502.42247ms: waiting for machine to come up
	I0911 10:56:53.128372 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:53.128741 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:53.128769 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:53.128687 2222816 retry.go:31] will retry after 703.115816ms: waiting for machine to come up
	I0911 10:56:53.833765 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:53.834201 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:53.834234 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:53.834133 2222816 retry.go:31] will retry after 810.829781ms: waiting for machine to come up
	I0911 10:56:54.647009 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:54.647418 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:54.647450 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:54.647364 2222816 retry.go:31] will retry after 786.103123ms: waiting for machine to come up
	I0911 10:56:55.435063 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:55.435558 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:55.435586 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:55.435476 2222816 retry.go:31] will retry after 1.216968943s: waiting for machine to come up
	I0911 10:56:56.654297 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:56.654795 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:56.654826 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:56.654742 2222816 retry.go:31] will retry after 1.645693064s: waiting for machine to come up
	I0911 10:56:58.302914 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:58.303343 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:58.303368 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:58.303289 2222816 retry.go:31] will retry after 1.403118165s: waiting for machine to come up
	I0911 10:56:59.709826 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:59.710299 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:59.710350 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:59.710250 2222816 retry.go:31] will retry after 1.793989775s: waiting for machine to come up
	I0911 10:57:01.506125 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:01.506628 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:57:01.506695 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:57:01.506571 2222816 retry.go:31] will retry after 2.373189625s: waiting for machine to come up
	I0911 10:57:03.883358 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:03.883770 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:57:03.883806 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:57:03.883719 2222816 retry.go:31] will retry after 4.354927218s: waiting for machine to come up
	I0911 10:57:08.242958 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:08.243439 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:57:08.243464 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:57:08.243420 2222816 retry.go:31] will retry after 3.80832799s: waiting for machine to come up
	I0911 10:57:12.055397 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.055843 2222784 main.go:141] libmachine: (addons-554886) Found IP for machine: 192.168.39.217
	I0911 10:57:12.055874 2222784 main.go:141] libmachine: (addons-554886) Reserving static IP address...
	I0911 10:57:12.055889 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has current primary IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.056292 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find host DHCP lease matching {name: "addons-554886", mac: "52:54:00:c7:87:82", ip: "192.168.39.217"} in network mk-addons-554886
	I0911 10:57:12.151321 2222784 main.go:141] libmachine: (addons-554886) DBG | Getting to WaitForSSH function...
	I0911 10:57:12.151359 2222784 main.go:141] libmachine: (addons-554886) Reserved static IP address: 192.168.39.217
	I0911 10:57:12.151374 2222784 main.go:141] libmachine: (addons-554886) Waiting for SSH to be available...
	I0911 10:57:12.154477 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.155074 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.155110 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.155252 2222784 main.go:141] libmachine: (addons-554886) DBG | Using SSH client type: external
	I0911 10:57:12.155273 2222784 main.go:141] libmachine: (addons-554886) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa (-rw-------)
	I0911 10:57:12.155320 2222784 main.go:141] libmachine: (addons-554886) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 10:57:12.155340 2222784 main.go:141] libmachine: (addons-554886) DBG | About to run SSH command:
	I0911 10:57:12.155349 2222784 main.go:141] libmachine: (addons-554886) DBG | exit 0
	I0911 10:57:12.248936 2222784 main.go:141] libmachine: (addons-554886) DBG | SSH cmd err, output: <nil>: 
	I0911 10:57:12.249211 2222784 main.go:141] libmachine: (addons-554886) KVM machine creation complete!
	I0911 10:57:12.249492 2222784 main.go:141] libmachine: (addons-554886) Calling .GetConfigRaw
	I0911 10:57:12.250107 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:12.250333 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:12.250585 2222784 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 10:57:12.250619 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:12.252102 2222784 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 10:57:12.252122 2222784 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 10:57:12.252129 2222784 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 10:57:12.252136 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.254964 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.255611 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.255675 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.255724 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.255932 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.256124 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.256272 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.256470 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.257608 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.257637 2222784 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 10:57:12.384242 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 10:57:12.384286 2222784 main.go:141] libmachine: Detecting the provisioner...
	I0911 10:57:12.384298 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.387300 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.387707 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.387737 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.387954 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.388234 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.388406 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.388539 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.388675 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.389171 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.389191 2222784 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 10:57:12.518389 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 10:57:12.518497 2222784 main.go:141] libmachine: found compatible host: buildroot
	I0911 10:57:12.518512 2222784 main.go:141] libmachine: Provisioning with buildroot...
	I0911 10:57:12.518524 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:57:12.518863 2222784 buildroot.go:166] provisioning hostname "addons-554886"
	I0911 10:57:12.518892 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:57:12.519134 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.521915 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.522257 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.522288 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.522421 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.522736 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.522945 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.523115 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.523340 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.523993 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.524013 2222784 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-554886 && echo "addons-554886" | sudo tee /etc/hostname
	I0911 10:57:12.661186 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-554886
	
	I0911 10:57:12.661234 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.664403 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.664780 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.664835 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.665008 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.665233 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.665403 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.665589 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.665713 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.666143 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.666172 2222784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-554886' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-554886/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-554886' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 10:57:12.802281 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
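The two SSH commands above are how the provisioner pins the guest hostname: the first sets it and writes /etc/hostname, the second makes sure /etc/hosts maps 127.0.1.1 to it. A condensed sketch of the same logic, using the hostname from this run; this is not an additional command the test executed:

    # sketch of the provisioning step above (hostname taken from this run)
    sudo hostname addons-554886 && echo "addons-554886" | sudo tee /etc/hostname
    if ! grep -q 'addons-554886' /etc/hosts; then
        echo '127.0.1.1 addons-554886' | sudo tee -a /etc/hosts
    fi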
	I0911 10:57:12.802318 2222784 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 10:57:12.802372 2222784 buildroot.go:174] setting up certificates
	I0911 10:57:12.802384 2222784 provision.go:83] configureAuth start
	I0911 10:57:12.802397 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:57:12.802720 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:12.805470 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.805953 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.805989 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.806144 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.808433 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.808711 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.808750 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.808916 2222784 provision.go:138] copyHostCerts
	I0911 10:57:12.809022 2222784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 10:57:12.809197 2222784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 10:57:12.809314 2222784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 10:57:12.809386 2222784 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.addons-554886 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube addons-554886]
	I0911 10:57:12.973496 2222784 provision.go:172] copyRemoteCerts
	I0911 10:57:12.973571 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 10:57:12.973647 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.976547 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.976953 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.976992 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.977171 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.977453 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.977670 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.977913 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.070339 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 10:57:13.094670 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 10:57:13.122326 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 10:57:13.147305 2222784 provision.go:86] duration metric: configureAuth took 344.903278ms
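configureAuth above copies ca.pem to the guest and generates and copies server.pem and server-key.pem into /etc/docker (the path is reused regardless of container runtime). A quick way to eyeball the result, sketched here; the test itself does not run these checks:

    # hypothetical verification of the provisioned certs (not executed by the test)
    sudo ls -l /etc/docker/ca.pem /etc/docker/server.pem /etc/docker/server-key.pem
    sudo openssl x509 -in /etc/docker/server.pem -noout -text | grep -A1 'Subject Alternative Name'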
	I0911 10:57:13.147342 2222784 buildroot.go:189] setting minikube options for container-runtime
	I0911 10:57:13.147571 2222784 config.go:182] Loaded profile config "addons-554886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 10:57:13.147654 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.151008 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.151477 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.151513 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.151708 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.151906 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.152095 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.152202 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.152378 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:13.152883 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:13.152905 2222784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 10:57:13.484319 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
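The %!s(MISSING) in the command above is an artifact of the logger re-formatting an already-formatted string, not what ran on the guest; the SSH output that follows shows the file contents that were actually written. The command most likely executed as:

    # reconstruction of the command above (its printf verb was mangled when the log was re-formatted)
    sudo mkdir -p /etc/sysconfig && printf %s "
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
    " | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio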
	I0911 10:57:13.484350 2222784 main.go:141] libmachine: Checking connection to Docker...
	I0911 10:57:13.484371 2222784 main.go:141] libmachine: (addons-554886) Calling .GetURL
	I0911 10:57:13.485510 2222784 main.go:141] libmachine: (addons-554886) DBG | Using libvirt version 6000000
	I0911 10:57:13.488021 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.488395 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.488432 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.488586 2222784 main.go:141] libmachine: Docker is up and running!
	I0911 10:57:13.488606 2222784 main.go:141] libmachine: Reticulating splines...
	I0911 10:57:13.488616 2222784 client.go:171] LocalClient.Create took 23.812331343s
	I0911 10:57:13.488644 2222784 start.go:167] duration metric: libmachine.API.Create for "addons-554886" took 23.812405041s
	I0911 10:57:13.488672 2222784 start.go:300] post-start starting for "addons-554886" (driver="kvm2")
	I0911 10:57:13.488688 2222784 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 10:57:13.488725 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.489001 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 10:57:13.489033 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.491388 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.491840 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.491865 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.492016 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.492215 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.492417 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.492562 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.589212 2222784 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 10:57:13.593876 2222784 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 10:57:13.593905 2222784 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 10:57:13.593999 2222784 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 10:57:13.594031 2222784 start.go:303] post-start completed in 105.347267ms
	I0911 10:57:13.594074 2222784 main.go:141] libmachine: (addons-554886) Calling .GetConfigRaw
	I0911 10:57:13.594746 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:13.597543 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.597980 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.598020 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.598346 2222784 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/config.json ...
	I0911 10:57:13.598530 2222784 start.go:128] duration metric: createHost completed in 23.942310791s
	I0911 10:57:13.598555 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.600595 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.601023 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.601054 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.601058 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.601244 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.601405 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.601552 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.601746 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:13.602242 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:13.602258 2222784 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 10:57:13.729864 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694429833.705405453
	
	I0911 10:57:13.729889 2222784 fix.go:206] guest clock: 1694429833.705405453
	I0911 10:57:13.729900 2222784 fix.go:219] Guest: 2023-09-11 10:57:13.705405453 +0000 UTC Remote: 2023-09-11 10:57:13.598542808 +0000 UTC m=+24.055516436 (delta=106.862645ms)
	I0911 10:57:13.729960 2222784 fix.go:190] guest clock delta is within tolerance: 106.862645ms
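The clock check above runs a date command on the guest (again with its printf verbs mangled in the log) and compares the result against the host, confirming the skew of roughly 107ms is within tolerance. The probe as it most likely ran:

    # guest-side clock probe, reconstructed from the mangled log line
    date +%s.%N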
	I0911 10:57:13.729972 2222784 start.go:83] releasing machines lock for "addons-554886", held for 24.073863036s
	I0911 10:57:13.730019 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.730338 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:13.733133 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.733502 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.733535 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.733665 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.734177 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.734343 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.734431 2222784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 10:57:13.734493 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.734626 2222784 ssh_runner.go:195] Run: cat /version.json
	I0911 10:57:13.734657 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.737112 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.737433 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.737486 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.737522 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.737644 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.737858 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.737929 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.737956 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.738026 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.738106 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.738194 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.738251 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.738360 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.738496 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.826349 2222784 ssh_runner.go:195] Run: systemctl --version
	I0911 10:57:13.853408 2222784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 10:57:14.027489 2222784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 10:57:14.034545 2222784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 10:57:14.034643 2222784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 10:57:14.051149 2222784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 10:57:14.051182 2222784 start.go:466] detecting cgroup driver to use...
	I0911 10:57:14.051256 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 10:57:14.064682 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 10:57:14.077113 2222784 docker.go:196] disabling cri-docker service (if available) ...
	I0911 10:57:14.077190 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 10:57:14.089823 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 10:57:14.102705 2222784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 10:57:14.208601 2222784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 10:57:14.331860 2222784 docker.go:212] disabling docker service ...
	I0911 10:57:14.331950 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 10:57:14.346612 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 10:57:14.360206 2222784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 10:57:14.471783 2222784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 10:57:14.583587 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 10:57:14.597510 2222784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 10:57:14.615349 2222784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 10:57:14.615412 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.625912 2222784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 10:57:14.625987 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.636603 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.646811 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.657349 2222784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 10:57:14.668487 2222784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 10:57:14.677746 2222784 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 10:57:14.677814 2222784 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 10:57:14.691718 2222784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 10:57:14.701671 2222784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 10:57:14.810507 2222784 ssh_runner.go:195] Run: sudo systemctl restart crio
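The sed edits between 10:57:14.615 and 10:57:14.646 should leave /etc/crio/crio.conf.d/02-crio.conf pointing at the 3.9 pause image, with the cgroupfs cgroup manager and a pod-scoped conmon cgroup, before crio is restarted above. A spot check, sketched here and not executed by the test:

    # hypothetical spot check of the CRI-O drop-in after the sed edits above
    grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
    # expected values given those edits:
    #   pause_image = "registry.k8s.io/pause:3.9"
    #   cgroup_manager = "cgroupfs"
    #   conmon_cgroup = "pod"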
	I0911 10:57:14.987239 2222784 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 10:57:14.987352 2222784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 10:57:14.992560 2222784 start.go:534] Will wait 60s for crictl version
	I0911 10:57:14.992652 2222784 ssh_runner.go:195] Run: which crictl
	I0911 10:57:14.996637 2222784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 10:57:15.027885 2222784 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 10:57:15.027998 2222784 ssh_runner.go:195] Run: crio --version
	I0911 10:57:15.072559 2222784 ssh_runner.go:195] Run: crio --version
	I0911 10:57:15.120762 2222784 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 10:57:15.122568 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:15.125498 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:15.125967 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:15.126002 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:15.126236 2222784 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 10:57:15.130729 2222784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 10:57:15.143465 2222784 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 10:57:15.143538 2222784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 10:57:15.178122 2222784 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 10:57:15.178212 2222784 ssh_runner.go:195] Run: which lz4
	I0911 10:57:15.182388 2222784 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 10:57:15.187175 2222784 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 10:57:15.187210 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 10:57:17.109170 2222784 crio.go:444] Took 1.926813 seconds to copy over tarball
	I0911 10:57:17.109251 2222784 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 10:57:20.373708 2222784 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.264420589s)
	I0911 10:57:20.373741 2222784 crio.go:451] Took 3.264540 seconds to extract the tarball
	I0911 10:57:20.373754 2222784 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 10:57:20.418003 2222784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 10:57:20.479160 2222784 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 10:57:20.479188 2222784 cache_images.go:84] Images are preloaded, skipping loading
	I0911 10:57:20.479266 2222784 ssh_runner.go:195] Run: crio config
	I0911 10:57:20.546922 2222784 cni.go:84] Creating CNI manager for ""
	I0911 10:57:20.546958 2222784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 10:57:20.546980 2222784 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 10:57:20.547035 2222784 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-554886 NodeName:addons-554886 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 10:57:20.547211 2222784 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-554886"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
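The generated config above is written to /var/tmp/minikube/kubeadm.yaml.new a few lines later and promoted to /var/tmp/minikube/kubeadm.yaml before kubeadm init consumes it. A hedged sketch of replaying it by hand against the same binaries, which the test does not do:

    # hypothetical manual replay of the generated config (not part of the test run)
    sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run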
	I0911 10:57:20.547325 2222784 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-554886 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 10:57:20.547403 2222784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 10:57:20.558100 2222784 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 10:57:20.558180 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 10:57:20.568005 2222784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0911 10:57:20.585110 2222784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 10:57:20.602034 2222784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
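The three "scp memory -->" calls above materialize the kubelet drop-in (10-kubeadm.conf), the kubelet unit file, and the kubeadm config from in-memory buffers rather than from files on the host. One way to confirm systemd sees the drop-in, sketched here and not executed by the test:

    # hypothetical check that systemd picks up the generated drop-in (not executed by the test)
    sudo systemctl daemon-reload
    systemctl cat kubelet.service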
	I0911 10:57:20.619459 2222784 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0911 10:57:20.623465 2222784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 10:57:20.635536 2222784 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886 for IP: 192.168.39.217
	I0911 10:57:20.635570 2222784 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.635768 2222784 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 10:57:20.737723 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt ...
	I0911 10:57:20.737757 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt: {Name:mk3bdf40aaa3e971cbfc0bb665325eb0a5ce86d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.737936 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key ...
	I0911 10:57:20.737948 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key: {Name:mkba3109852a7b32eb1bd9b47bfb518624795727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.738024 2222784 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 10:57:20.838028 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt ...
	I0911 10:57:20.838074 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt: {Name:mk0a269b1262311a1d3492bb27a6644ac573d500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.838281 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key ...
	I0911 10:57:20.838296 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key: {Name:mk3f002372bc48948e14f9b7fb04e041aabdf242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.838402 2222784 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.key
	I0911 10:57:20.838416 2222784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt with IP's: []
	I0911 10:57:20.971188 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt ...
	I0911 10:57:20.971228 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: {Name:mk2b361d6ec44224f0767ee31fd839a9e614ba85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.971455 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.key ...
	I0911 10:57:20.971471 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.key: {Name:mk1acf4568d3df9938cb70ff61f23299e82ed04b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.971574 2222784 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f
	I0911 10:57:20.971596 2222784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 10:57:21.405430 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f ...
	I0911 10:57:21.405469 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f: {Name:mkf7a8c2c8249ef121fad574998703f4a9aa9102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.405676 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f ...
	I0911 10:57:21.405692 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f: {Name:mk0195efc836f6102a964acbf9831aec9ea7f2e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.405798 2222784 certs.go:337] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt
	I0911 10:57:21.405915 2222784 certs.go:341] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key
	I0911 10:57:21.405989 2222784 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key
	I0911 10:57:21.406016 2222784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt with IP's: []
	I0911 10:57:21.559910 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt ...
	I0911 10:57:21.559946 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt: {Name:mk10f652c4ac947cb6aa5ca6e0a1aa76dbe78ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.560159 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key ...
	I0911 10:57:21.560175 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key: {Name:mk379e93aebb290122e9527116a9e359bea84285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.560394 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 10:57:21.560436 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 10:57:21.560491 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 10:57:21.560517 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 10:57:21.561223 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 10:57:21.586545 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 10:57:21.610582 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 10:57:21.634275 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 10:57:21.657539 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 10:57:21.682338 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 10:57:21.707666 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 10:57:21.732625 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 10:57:21.756473 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 10:57:21.780561 2222784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 10:57:21.797543 2222784 ssh_runner.go:195] Run: openssl version
	I0911 10:57:21.803539 2222784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 10:57:21.814049 2222784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 10:57:21.818845 2222784 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 10:57:21.818917 2222784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 10:57:21.824447 2222784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 10:57:21.835378 2222784 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 10:57:21.839777 2222784 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 10:57:21.839881 2222784 kubeadm.go:404] StartCluster: {Name:addons-554886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 10:57:21.840057 2222784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 10:57:21.840114 2222784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 10:57:21.874227 2222784 cri.go:89] found id: ""
	I0911 10:57:21.874305 2222784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 10:57:21.885394 2222784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 10:57:21.895710 2222784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 10:57:21.906283 2222784 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 10:57:21.906339 2222784 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 10:57:22.105483 2222784 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 10:57:34.864226 2222784 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 10:57:34.864306 2222784 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 10:57:34.864429 2222784 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 10:57:34.864559 2222784 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 10:57:34.864714 2222784 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 10:57:34.864823 2222784 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 10:57:34.866745 2222784 out.go:204]   - Generating certificates and keys ...
	I0911 10:57:34.866832 2222784 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 10:57:34.866936 2222784 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 10:57:34.867050 2222784 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 10:57:34.867141 2222784 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 10:57:34.867232 2222784 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 10:57:34.867329 2222784 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 10:57:34.867407 2222784 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 10:57:34.867533 2222784 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-554886 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0911 10:57:34.867633 2222784 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 10:57:34.867826 2222784 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-554886 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0911 10:57:34.867932 2222784 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 10:57:34.868012 2222784 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 10:57:34.868055 2222784 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 10:57:34.868102 2222784 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 10:57:34.868145 2222784 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 10:57:34.868191 2222784 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 10:57:34.868244 2222784 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 10:57:34.868291 2222784 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 10:57:34.868376 2222784 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 10:57:34.868460 2222784 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 10:57:34.870278 2222784 out.go:204]   - Booting up control plane ...
	I0911 10:57:34.870419 2222784 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 10:57:34.870518 2222784 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 10:57:34.870623 2222784 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 10:57:34.870767 2222784 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 10:57:34.870844 2222784 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 10:57:34.870878 2222784 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 10:57:34.871062 2222784 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 10:57:34.871150 2222784 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002903 seconds
	I0911 10:57:34.871301 2222784 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 10:57:34.871456 2222784 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 10:57:34.871543 2222784 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 10:57:34.871817 2222784 kubeadm.go:322] [mark-control-plane] Marking the node addons-554886 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 10:57:34.871919 2222784 kubeadm.go:322] [bootstrap-token] Using token: c827vt.rel7mk8dgs8gzzvy
	I0911 10:57:34.873583 2222784 out.go:204]   - Configuring RBAC rules ...
	I0911 10:57:34.873746 2222784 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 10:57:34.873849 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 10:57:34.873989 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 10:57:34.874138 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 10:57:34.874243 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 10:57:34.874372 2222784 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 10:57:34.874518 2222784 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 10:57:34.874574 2222784 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 10:57:34.874628 2222784 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 10:57:34.874641 2222784 kubeadm.go:322] 
	I0911 10:57:34.874723 2222784 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 10:57:34.874730 2222784 kubeadm.go:322] 
	I0911 10:57:34.874827 2222784 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 10:57:34.874849 2222784 kubeadm.go:322] 
	I0911 10:57:34.874889 2222784 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 10:57:34.874966 2222784 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 10:57:34.875044 2222784 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 10:57:34.875056 2222784 kubeadm.go:322] 
	I0911 10:57:34.875137 2222784 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 10:57:34.875146 2222784 kubeadm.go:322] 
	I0911 10:57:34.875225 2222784 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 10:57:34.875235 2222784 kubeadm.go:322] 
	I0911 10:57:34.875282 2222784 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 10:57:34.875367 2222784 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 10:57:34.875447 2222784 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 10:57:34.875460 2222784 kubeadm.go:322] 
	I0911 10:57:34.875572 2222784 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 10:57:34.875684 2222784 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 10:57:34.875694 2222784 kubeadm.go:322] 
	I0911 10:57:34.875801 2222784 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token c827vt.rel7mk8dgs8gzzvy \
	I0911 10:57:34.875887 2222784 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 10:57:34.875910 2222784 kubeadm.go:322] 	--control-plane 
	I0911 10:57:34.875916 2222784 kubeadm.go:322] 
	I0911 10:57:34.875996 2222784 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 10:57:34.876009 2222784 kubeadm.go:322] 
	I0911 10:57:34.876112 2222784 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token c827vt.rel7mk8dgs8gzzvy \
	I0911 10:57:34.876284 2222784 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 10:57:34.876301 2222784 cni.go:84] Creating CNI manager for ""
	I0911 10:57:34.876308 2222784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 10:57:34.878270 2222784 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 10:57:34.879867 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 10:57:34.947968 2222784 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 10:57:35.006384 2222784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 10:57:35.006505 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.006526 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=addons-554886 minikube.k8s.io/updated_at=2023_09_11T10_57_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.040722 2222784 ops.go:34] apiserver oom_adj: -16
	I0911 10:57:35.218917 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.318269 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.915863 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:36.415839 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:36.916099 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:37.415304 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:37.915424 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:38.415957 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:38.915596 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:39.415973 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:39.915166 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:40.415249 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:40.915495 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:41.415898 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:41.916006 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:42.415317 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:42.915377 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:43.415684 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:43.916133 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:44.415863 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:44.915891 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:45.415492 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:45.915168 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:46.415858 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:46.915914 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:47.049137 2222784 kubeadm.go:1081] duration metric: took 12.042707541s to wait for elevateKubeSystemPrivileges.
	I0911 10:57:47.049173 2222784 kubeadm.go:406] StartCluster complete in 25.209305474s
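The burst of "kubectl get sa default" calls between 10:57:35 and 10:57:47 is minikube polling, roughly every half second, until the default service account exists; the 12.04s "elevateKubeSystemPrivileges" duration metric logged just above covers this wait. An equivalent hand-rolled loop, purely illustrative but using the same binary and kubeconfig paths that appear in the log, would be:

	until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
	    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done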
	I0911 10:57:47.049200 2222784 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:47.049408 2222784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 10:57:47.049953 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:47.050235 2222784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 10:57:47.050246 2222784 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
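Every key set to true in the toEnable map above gets switched on for the addons-554886 profile in the lines that follow. Outside the test harness the same toggles are driven through the minikube addons subcommand; for example (commands illustrative, not taken from this run):

	minikube -p addons-554886 addons list
	minikube -p addons-554886 addons enable metrics-server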
	I0911 10:57:47.050421 2222784 addons.go:69] Setting helm-tiller=true in profile "addons-554886"
	I0911 10:57:47.050442 2222784 addons.go:69] Setting metrics-server=true in profile "addons-554886"
	I0911 10:57:47.050447 2222784 addons.go:69] Setting inspektor-gadget=true in profile "addons-554886"
	I0911 10:57:47.050480 2222784 addons.go:231] Setting addon metrics-server=true in "addons-554886"
	I0911 10:57:47.050483 2222784 addons.go:231] Setting addon helm-tiller=true in "addons-554886"
	I0911 10:57:47.050465 2222784 addons.go:69] Setting ingress=true in profile "addons-554886"
	I0911 10:57:47.050487 2222784 config.go:182] Loaded profile config "addons-554886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 10:57:47.050504 2222784 addons.go:69] Setting registry=true in profile "addons-554886"
	I0911 10:57:47.050512 2222784 addons.go:231] Setting addon ingress=true in "addons-554886"
	I0911 10:57:47.050517 2222784 addons.go:231] Setting addon registry=true in "addons-554886"
	I0911 10:57:47.050490 2222784 addons.go:69] Setting ingress-dns=true in profile "addons-554886"
	I0911 10:57:47.050567 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050579 2222784 addons.go:69] Setting cloud-spanner=true in profile "addons-554886"
	I0911 10:57:47.050579 2222784 addons.go:69] Setting default-storageclass=true in profile "addons-554886"
	I0911 10:57:47.050588 2222784 addons.go:69] Setting gcp-auth=true in profile "addons-554886"
	I0911 10:57:47.050591 2222784 addons.go:231] Setting addon cloud-spanner=true in "addons-554886"
	I0911 10:57:47.050590 2222784 addons.go:69] Setting storage-provisioner=true in profile "addons-554886"
	I0911 10:57:47.050598 2222784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-554886"
	I0911 10:57:47.050604 2222784 mustload.go:65] Loading cluster: addons-554886
	I0911 10:57:47.050619 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050619 2222784 addons.go:231] Setting addon storage-provisioner=true in "addons-554886"
	I0911 10:57:47.050484 2222784 addons.go:231] Setting addon inspektor-gadget=true in "addons-554886"
	I0911 10:57:47.050800 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050803 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050815 2222784 config.go:182] Loaded profile config "addons-554886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 10:57:47.051115 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051115 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.050568 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050567 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051156 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051167 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051197 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050580 2222784 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-554886"
	I0911 10:57:47.051296 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051318 2222784 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-554886"
	I0911 10:57:47.051357 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051360 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051392 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051415 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051478 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051486 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051498 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051514 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050428 2222784 addons.go:69] Setting volumesnapshots=true in profile "addons-554886"
	I0911 10:57:47.051556 2222784 addons.go:231] Setting addon volumesnapshots=true in "addons-554886"
	I0911 10:57:47.051591 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051116 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051647 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050570 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051710 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051742 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050571 2222784 addons.go:231] Setting addon ingress-dns=true in "addons-554886"
	I0911 10:57:47.051842 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051279 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051926 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051953 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.052057 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.052084 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.071966 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0911 10:57:47.071983 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0911 10:57:47.071965 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I0911 10:57:47.072421 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.072490 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.073116 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.073137 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.073120 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.073163 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.073571 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.073618 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0911 10:57:47.074170 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.074222 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.074260 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.074262 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.074730 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.074759 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.075117 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.075305 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.081120 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.081180 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.081426 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.081471 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.081490 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.081510 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0911 10:57:47.081560 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.081931 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.081963 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.082033 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.082053 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.082101 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.082559 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.082593 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.085959 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0911 10:57:47.086534 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.087153 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.087190 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.087555 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.088096 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.088144 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.091219 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0911 10:57:47.091831 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.092462 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.092480 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.092906 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.093146 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.093824 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.094450 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0911 10:57:47.094678 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.094696 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.095188 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.095299 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.096009 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.096028 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.096238 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.096281 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.096387 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.096545 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.096611 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0911 10:57:47.097517 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.099457 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.099477 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.100061 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.100663 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.100710 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.100883 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.101195 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0911 10:57:47.103766 2222784 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0911 10:57:47.101815 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.102668 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46223
	I0911 10:57:47.104360 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0911 10:57:47.105599 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 10:57:47.105614 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 10:57:47.105639 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.106054 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.106313 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.106330 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.106634 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.106765 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.107438 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.107486 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.107889 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.107908 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.108429 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.109032 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.109075 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.109216 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.109352 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.109366 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.109831 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.109903 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.109948 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.109985 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.110212 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.110649 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.110687 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.110822 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.110996 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.113642 2222784 addons.go:231] Setting addon default-storageclass=true in "addons-554886"
	I0911 10:57:47.113693 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.114058 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.114105 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.126652 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0911 10:57:47.126846 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34589
	I0911 10:57:47.127034 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0911 10:57:47.127567 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.127669 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.128220 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.128420 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.128432 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.128784 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.128805 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.128894 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0911 10:57:47.128995 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.129279 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.129299 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.129365 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.129434 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.129484 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.130114 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.130137 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.130358 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.130881 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.131070 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.131070 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.131694 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.131775 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.134318 2222784 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0911 10:57:47.132541 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.133371 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.133859 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.136190 2222784 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0911 10:57:47.136212 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0911 10:57:47.136234 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.138197 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0911 10:57:47.136445 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0911 10:57:47.138697 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I0911 10:57:47.138788 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0911 10:57:47.139681 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.140337 2222784 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0911 10:57:47.142127 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0911 10:57:47.142146 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0911 10:57:47.142169 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.140360 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.142247 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.140245 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.140294 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 10:57:47.140996 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.141052 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.141495 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.142479 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.142733 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0911 10:57:47.144329 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I0911 10:57:47.144351 2222784 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 10:57:47.144518 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 10:57:47.144542 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.144637 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 10:57:47.146273 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 10:57:47.144896 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.144953 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.145556 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.145589 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0911 10:57:47.145699 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.145770 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.145778 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.145878 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.146737 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.148077 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.148255 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148370 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148422 2222784 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0911 10:57:47.148440 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0911 10:57:47.148461 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.148424 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148439 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.148503 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.148854 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.148864 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.148915 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.148933 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.148947 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.148961 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148958 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.149020 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.149064 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.149080 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.149084 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.149136 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.149426 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.149666 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.149932 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.149988 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.150009 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.150039 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.150394 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.150816 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.150979 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.150999 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.151076 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.151385 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.151481 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.151596 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.153173 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.153358 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.153590 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156060 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0911 10:57:47.154240 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.154288 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156297 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.156315 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156339 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156914 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.157191 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I0911 10:57:47.157662 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.157671 2222784 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0911 10:57:47.157681 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0911 10:57:47.157686 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.157695 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.158096 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.158093 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.159615 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0911 10:57:47.158286 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.158527 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.160759 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.161131 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0911 10:57:47.161295 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.161468 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.162598 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0911 10:57:47.164090 2222784 out.go:177]   - Using image docker.io/registry:2.8.1
	I0911 10:57:47.163120 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.162662 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.164781 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.165779 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0911 10:57:47.165894 2222784 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0911 10:57:47.165910 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.167459 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0911 10:57:47.167479 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0911 10:57:47.167498 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.167506 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.167560 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0911 10:57:47.170596 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.169167 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.170605 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0911 10:57:47.167800 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.168806 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0911 10:57:47.167718 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.170938 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.171117 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.172441 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.174208 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0911 10:57:47.172583 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.172656 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.173149 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.173158 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.174035 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.174577 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.177470 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0911 10:57:47.175916 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.175945 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.175969 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.176069 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.176342 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.179095 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.180877 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0911 10:57:47.179222 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.179493 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.179539 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.179697 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.184067 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0911 10:57:47.182641 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.182697 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.187132 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0911 10:57:47.187785 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.188841 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0911 10:57:47.188908 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0911 10:57:47.188939 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.191064 2222784 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0911 10:57:47.192728 2222784 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0911 10:57:47.191870 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.192747 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0911 10:57:47.192568 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.192769 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.192772 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.192798 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.192958 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.193071 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.193167 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.195677 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.196097 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.196127 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.196253 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.196450 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.196611 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.196790 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.199711 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0911 10:57:47.200189 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.200680 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.200703 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.201036 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.201236 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.202689 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.202933 2222784 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 10:57:47.202951 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 10:57:47.202970 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.205763 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.206235 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.206264 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.206489 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.206653 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.206826 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.206975 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.251512 2222784 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-554886" context rescaled to 1 replicas
	I0911 10:57:47.251559 2222784 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 10:57:47.254075 2222784 out.go:177] * Verifying Kubernetes components...
	I0911 10:57:47.255792 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 10:57:47.474561 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 10:57:47.476120 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0911 10:57:47.476157 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0911 10:57:47.488572 2222784 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0911 10:57:47.488607 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0911 10:57:47.526850 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 10:57:47.526873 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0911 10:57:47.535541 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0911 10:57:47.545357 2222784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
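The bash pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block ahead of the forward-to-resolv.conf stanza so that host.minikube.internal resolves to 192.168.39.1 (the host side of the KVM network), and adds a log line just before errors to turn on query logging. Reassembled from the first sed expression, the injected Corefile fragment is:

	hosts {
	   192.168.39.1 host.minikube.internal
	   fallthrough
	}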
	I0911 10:57:47.546159 2222784 node_ready.go:35] waiting up to 6m0s for node "addons-554886" to be "Ready" ...
	I0911 10:57:47.560182 2222784 node_ready.go:49] node "addons-554886" has status "Ready":"True"
	I0911 10:57:47.560214 2222784 node_ready.go:38] duration metric: took 14.028725ms waiting for node "addons-554886" to be "Ready" ...
	I0911 10:57:47.560227 2222784 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
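At this point the node reports Ready after about 14ms of waiting, and minikube moves on to waiting for the system-critical pods listed above. The same state can be inspected directly against this profile's kubeconfig context (commands illustrative, not part of the recorded run):

	kubectl --context addons-554886 get nodes
	kubectl --context addons-554886 -n kube-system get pods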
	I0911 10:57:47.578753 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0911 10:57:47.583979 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 10:57:47.601034 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0911 10:57:47.601061 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0911 10:57:47.607288 2222784 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0911 10:57:47.607315 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0911 10:57:47.609842 2222784 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0911 10:57:47.609865 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0911 10:57:47.620394 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0911 10:57:47.622537 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0911 10:57:47.622561 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0911 10:57:47.646978 2222784 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0911 10:57:47.647011 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0911 10:57:47.659367 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 10:57:47.659397 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 10:57:47.674901 2222784 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace to be "Ready" ...
	I0911 10:57:47.717115 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0911 10:57:47.717143 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0911 10:57:47.840573 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0911 10:57:47.840599 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0911 10:57:47.842600 2222784 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0911 10:57:47.842620 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0911 10:57:47.905975 2222784 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0911 10:57:47.906006 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0911 10:57:47.920946 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 10:57:47.920976 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 10:57:47.930968 2222784 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0911 10:57:47.930995 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0911 10:57:47.934699 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0911 10:57:47.934723 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0911 10:57:48.245609 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0911 10:57:48.292944 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0911 10:57:48.298530 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 10:57:48.303414 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0911 10:57:48.303440 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0911 10:57:48.308184 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0911 10:57:48.308211 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0911 10:57:48.319567 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0911 10:57:48.319594 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0911 10:57:48.350383 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0911 10:57:48.350416 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0911 10:57:48.399621 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0911 10:57:48.399652 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0911 10:57:48.433534 2222784 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 10:57:48.433562 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0911 10:57:48.444367 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0911 10:57:48.444397 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0911 10:57:48.509369 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0911 10:57:48.509398 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0911 10:57:48.545518 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0911 10:57:48.545550 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0911 10:57:48.559893 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 10:57:48.601433 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0911 10:57:48.601463 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0911 10:57:48.618779 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0911 10:57:48.618811 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0911 10:57:48.691957 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0911 10:57:48.691987 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0911 10:57:48.692060 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0911 10:57:48.749965 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0911 10:57:48.750000 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0911 10:57:48.817989 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0911 10:57:50.649963 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:51.897287 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.42268586s)
	I0911 10:57:51.897358 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:51.897375 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:51.897757 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:51.897834 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:51.897876 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:51.897893 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:51.897799 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:51.898143 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:51.898175 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:51.898195 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:51.898209 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:51.898419 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:51.898434 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:53.188237 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.652645319s)
	I0911 10:57:53.188311 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:53.188330 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:53.188351 2222784 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.642955122s)
	I0911 10:57:53.188391 2222784 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0911 10:57:53.188889 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:53.188910 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:53.188921 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:53.188931 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:53.189228 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:53.189256 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:53.189234 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:53.245809 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:54.208377 2222784 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0911 10:57:54.208432 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:54.212265 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.212748 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:54.212786 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.213019 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:54.213295 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:54.213492 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:54.213709 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:54.626241 2222784 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0911 10:57:54.655335 2222784 addons.go:231] Setting addon gcp-auth=true in "addons-554886"
	I0911 10:57:54.655415 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:54.655772 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:54.655831 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:54.671666 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I0911 10:57:54.672147 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:54.672726 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:54.672762 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:54.673209 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:54.673924 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:54.673985 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:54.690497 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0911 10:57:54.691034 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:54.691676 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:54.691697 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:54.692078 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:54.692299 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:54.694284 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:54.694567 2222784 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0911 10:57:54.694606 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:54.697550 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.697970 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:54.698006 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.698168 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:54.698390 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:54.698576 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:54.698757 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:55.423951 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:56.443860 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.859844031s)
	I0911 10:57:56.443890 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.865094384s)
	I0911 10:57:56.443931 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.443933 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.443955 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.443978 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.443981 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.823551965s)
	I0911 10:57:56.444015 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444033 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444069 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.198427892s)
	I0911 10:57:56.444091 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444101 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444152 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.15117404s)
	I0911 10:57:56.444248 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.145684356s)
	I0911 10:57:56.444473 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.752378074s)
	I0911 10:57:56.444495 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444502 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444512 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444386 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.884455517s)
	I0911 10:57:56.444516 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	W0911 10:57:56.444558 2222784 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0911 10:57:56.444475 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444597 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444606 2222784 retry.go:31] will retry after 311.309959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0911 10:57:56.444806 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.444833 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.444835 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.444843 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444852 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444858 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.444888 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.444896 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.444906 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444914 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444927 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.444936 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.444945 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444953 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444997 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445012 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445021 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.445031 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.445166 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445178 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445188 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.445196 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.445276 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.445299 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445307 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445657 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.445692 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445705 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445741 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446377 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.446395 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.446407 2222784 addons.go:467] Verifying addon registry=true in "addons-554886"
	I0911 10:57:56.449802 2222784 out.go:177] * Verifying registry addon...
	I0911 10:57:56.446806 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446836 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.446870 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446892 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.446907 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446926 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.447007 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.447029 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.451319 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451355 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451357 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451366 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.451368 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451376 2222784 addons.go:467] Verifying addon ingress=true in "addons-554886"
	I0911 10:57:56.451405 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.451424 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.451377 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.453090 2222784 out.go:177] * Verifying ingress addon...
	I0911 10:57:56.451750 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.451756 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.451771 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.451792 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.452471 2222784 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0911 10:57:56.454696 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.454723 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.454755 2222784 addons.go:467] Verifying addon metrics-server=true in "addons-554886"
	I0911 10:57:56.455387 2222784 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0911 10:57:56.476306 2222784 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0911 10:57:56.476338 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:56.479667 2222784 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0911 10:57:56.479699 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:56.496088 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:56.496501 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:56.757106 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 10:57:57.102705 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:57.198223 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:57.375881 2222784 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.681278444s)
	I0911 10:57:57.375940 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.557882874s)
	I0911 10:57:57.378022 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 10:57:57.375998 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:57.379560 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:57.381293 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0911 10:57:57.379967 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:57.380003 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:57.382772 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:57.382800 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:57.382819 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:57.382858 2222784 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0911 10:57:57.382882 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0911 10:57:57.383087 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:57.383139 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:57.383158 2222784 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-554886"
	I0911 10:57:57.384757 2222784 out.go:177] * Verifying csi-hostpath-driver addon...
	I0911 10:57:57.387120 2222784 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0911 10:57:57.449040 2222784 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0911 10:57:57.449068 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0911 10:57:57.452650 2222784 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0911 10:57:57.452675 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:57.469985 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:57.499030 2222784 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0911 10:57:57.499067 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0911 10:57:57.507869 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:57.510663 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:57.525368 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0911 10:57:57.832205 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:57.977493 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:58.069106 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:58.069186 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:58.480276 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:58.510182 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:58.514574 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:59.013654 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:59.064028 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:59.065494 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:59.399161 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.641967973s)
	I0911 10:57:59.399265 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.399288 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.399708 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.399730 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.399746 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.399762 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.399995 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.400013 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.490944 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:59.526657 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:59.526990 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:59.715486 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.190059706s)
	I0911 10:57:59.715563 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.715609 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.716132 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.716155 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.716166 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.716175 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.716215 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:59.716547 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.716564 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.716569 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:59.718685 2222784 addons.go:467] Verifying addon gcp-auth=true in "addons-554886"
	I0911 10:57:59.720849 2222784 out.go:177] * Verifying gcp-auth addon...
	I0911 10:57:59.723523 2222784 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0911 10:57:59.761356 2222784 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0911 10:57:59.761381 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:57:59.788147 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:57:59.986976 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:00.014888 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:00.014894 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:00.298192 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:00.301635 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:00.477077 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:00.502601 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:00.504351 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:00.803672 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:00.982323 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:01.008500 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:01.009449 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:01.311269 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:01.480125 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:01.503487 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:01.504185 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:01.802004 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:01.976228 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:02.003570 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:02.003742 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:02.296963 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:02.477085 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:02.504665 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:02.506604 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:02.793642 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:02.794659 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:02.976382 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:03.003788 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:03.004330 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:03.293482 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:03.478382 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:03.504044 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:03.505346 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:03.795206 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:03.980426 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:04.026241 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:04.026251 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:04.312506 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:04.479695 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:04.502951 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:04.503437 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:04.807023 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:04.808462 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:04.993860 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:05.030610 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:05.035528 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:05.296491 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:05.483035 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:05.504008 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:05.504045 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:05.794777 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:05.977304 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:06.007413 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:06.008772 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:06.322715 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:06.480144 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:06.502312 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:06.508620 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:06.793288 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:06.975858 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:07.005458 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:07.009212 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:07.298679 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:07.302394 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:07.476141 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:07.510687 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:07.514128 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:07.792928 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:07.975808 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:08.031517 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:08.033842 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:08.293861 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:08.476945 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:08.506223 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:08.506522 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:08.800972 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:08.985556 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:09.004935 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:09.005047 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:09.301734 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:09.313811 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:09.479405 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:09.519266 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:09.519534 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:09.797931 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:09.981036 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:10.012391 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:10.015694 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:10.294665 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:10.476466 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:10.510874 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:10.512422 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:10.801521 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:11.399467 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:11.401355 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:11.404364 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:11.405093 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:11.515099 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:11.553234 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:11.585665 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:11.585920 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:11.803479 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:11.987560 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:12.005278 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:12.012613 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:12.293939 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:12.478848 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:12.504170 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:12.504322 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:12.821884 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:12.978908 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:13.006174 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:13.006341 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:13.296404 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:13.476446 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:13.503658 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:13.504946 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:13.795014 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:13.809313 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:13.976736 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:14.004875 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:14.009615 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:14.292891 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:14.476016 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:14.507970 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:14.508582 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:14.800694 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:14.976492 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:15.004526 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:15.010056 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:15.293161 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:15.477618 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:15.502949 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:15.504002 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:15.797783 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:15.975689 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:16.012104 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:16.012710 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:16.299343 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:16.299547 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:16.485185 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:16.506709 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:16.513680 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:16.805119 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:16.976470 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:17.006232 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:17.007871 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:17.296107 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:17.477257 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:17.511135 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:17.513042 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:17.793749 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:17.976313 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:18.004354 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:18.005021 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:18.292917 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:18.477101 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:18.503129 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:18.507936 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:18.804237 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:18.805281 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:18.977845 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:19.003361 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:19.003432 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:19.295065 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:19.478841 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:19.502147 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:19.503084 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:19.852914 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:19.976783 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:20.004669 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:20.012613 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:20.292677 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:20.478029 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:20.502968 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:20.511012 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:20.800018 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:20.806517 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:20.979634 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:21.003053 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:21.003664 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:21.293742 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:21.478336 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:21.502841 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:21.502871 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:21.794124 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:22.044221 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:22.045317 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:22.045872 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:22.292893 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:22.479063 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:22.501955 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:22.502388 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:22.953166 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:22.956278 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:22.978957 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:23.004914 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:23.005099 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:23.294457 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:23.479523 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:23.501283 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:23.502741 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:23.798397 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:23.976806 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:24.005986 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:24.006097 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:24.294064 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:24.476893 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:24.504499 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:24.507740 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:24.795802 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:24.977747 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:25.003099 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:25.004911 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:25.296148 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:25.299533 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:25.479573 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:25.504607 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:25.504756 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:25.801011 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:25.985024 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:26.006579 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:26.007317 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:26.301569 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:26.476888 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:26.501642 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:26.501973 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:26.795744 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:26.977708 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:27.003559 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:27.006708 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:27.293803 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:27.477327 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:27.502604 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:27.502916 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:27.795812 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:27.798995 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:27.982368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:28.003200 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:28.004627 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:28.311165 2222784 pod_ready.go:92] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.311202 2222784 pod_ready.go:81] duration metric: took 40.636263094s waiting for pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.311217 2222784 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.312082 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:28.316235 2222784 pod_ready.go:97] error getting pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-nprn6" not found
	I0911 10:58:28.316274 2222784 pod_ready.go:81] duration metric: took 5.048768ms waiting for pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace to be "Ready" ...
	E0911 10:58:28.316289 2222784 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-nprn6" not found
	I0911 10:58:28.316301 2222784 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.329937 2222784 pod_ready.go:92] pod "etcd-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.329970 2222784 pod_ready.go:81] duration metric: took 13.661212ms waiting for pod "etcd-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.329987 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.339437 2222784 pod_ready.go:92] pod "kube-apiserver-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.339469 2222784 pod_ready.go:81] duration metric: took 9.474337ms waiting for pod "kube-apiserver-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.339486 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.369563 2222784 pod_ready.go:92] pod "kube-controller-manager-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.369601 2222784 pod_ready.go:81] duration metric: took 30.106704ms waiting for pod "kube-controller-manager-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.369618 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-96wzg" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.483062 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:28.493980 2222784 pod_ready.go:92] pod "kube-proxy-96wzg" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.494011 2222784 pod_ready.go:81] duration metric: took 124.382695ms waiting for pod "kube-proxy-96wzg" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.494025 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.505107 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:28.506039 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:28.872892 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:28.891008 2222784 pod_ready.go:92] pod "kube-scheduler-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.891034 2222784 pod_ready.go:81] duration metric: took 397.00219ms waiting for pod "kube-scheduler-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.891043 2222784 pod_ready.go:38] duration metric: took 41.330801285s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 10:58:28.891064 2222784 api_server.go:52] waiting for apiserver process to appear ...
	I0911 10:58:28.891128 2222784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 10:58:28.932638 2222784 api_server.go:72] duration metric: took 41.681036941s to wait for apiserver process to appear ...
	I0911 10:58:28.932678 2222784 api_server.go:88] waiting for apiserver healthz status ...
	I0911 10:58:28.932697 2222784 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0911 10:58:28.938067 2222784 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0911 10:58:28.939499 2222784 api_server.go:141] control plane version: v1.28.1
	I0911 10:58:28.939526 2222784 api_server.go:131] duration metric: took 6.840014ms to wait for apiserver health ...
	I0911 10:58:28.939536 2222784 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 10:58:28.976323 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:29.002935 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:29.004714 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:29.105285 2222784 system_pods.go:59] 17 kube-system pods found
	I0911 10:58:29.105320 2222784 system_pods.go:61] "coredns-5dd5756b68-2cg8c" [a229e351-155b-4d57-9746-e272bb98598b] Running
	I0911 10:58:29.105329 2222784 system_pods.go:61] "csi-hostpath-attacher-0" [245e9000-d196-429f-bf8a-ecced1fb4a71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0911 10:58:29.105338 2222784 system_pods.go:61] "csi-hostpath-resizer-0" [62c4130b-1a92-424a-a665-557da4d3f75b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0911 10:58:29.105346 2222784 system_pods.go:61] "csi-hostpathplugin-nwdhc" [239b8e34-6457-4c49-8ad7-1947faae7550] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0911 10:58:29.105351 2222784 system_pods.go:61] "etcd-addons-554886" [de10e470-2588-4a7f-8e8e-8c84386ee6c5] Running
	I0911 10:58:29.105356 2222784 system_pods.go:61] "kube-apiserver-addons-554886" [c8aff2d0-df06-48cd-a21b-e1b060e3be2d] Running
	I0911 10:58:29.105360 2222784 system_pods.go:61] "kube-controller-manager-addons-554886" [1480e1eb-ad72-4c18-a9a8-a2528659fbf1] Running
	I0911 10:58:29.105367 2222784 system_pods.go:61] "kube-ingress-dns-minikube" [3715ae8a-f6d7-4bfc-b92c-a3586056893e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0911 10:58:29.105371 2222784 system_pods.go:61] "kube-proxy-96wzg" [0655ce43-1406-45df-96a8-df0f9f378891] Running
	I0911 10:58:29.105375 2222784 system_pods.go:61] "kube-scheduler-addons-554886" [b2c82861-60fb-45da-8a82-d487c1c1301c] Running
	I0911 10:58:29.105381 2222784 system_pods.go:61] "metrics-server-7c66d45ddc-7krqz" [68915a10-f10d-4296-8a14-8c21f7f71a42] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 10:58:29.105389 2222784 system_pods.go:61] "registry-proxy-lmsgk" [c3d6d669-7454-4529-b9ac-06abb4face91] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0911 10:58:29.105398 2222784 system_pods.go:61] "registry-t6754" [8531b6ac-003f-4a6d-aab4-67819497ab11] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0911 10:58:29.105406 2222784 system_pods.go:61] "snapshot-controller-58dbcc7b99-2nql9" [e0c9b597-80fb-4724-8eef-0e970bed2638] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.105428 2222784 system_pods.go:61] "snapshot-controller-58dbcc7b99-9f7nb" [0aed7656-2dfe-4ac7-ad14-ab43a08a531f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.105433 2222784 system_pods.go:61] "storage-provisioner" [a512a348-5ded-427c-886d-f1ea3077d8ad] Running
	I0911 10:58:29.105439 2222784 system_pods.go:61] "tiller-deploy-7b677967b9-dtz9n" [871f81ec-dd78-4aa4-89e9-5b99419aa8d5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0911 10:58:29.105449 2222784 system_pods.go:74] duration metric: took 165.906891ms to wait for pod list to return data ...
	I0911 10:58:29.105460 2222784 default_sa.go:34] waiting for default service account to be created ...
	I0911 10:58:29.290200 2222784 default_sa.go:45] found service account: "default"
	I0911 10:58:29.290228 2222784 default_sa.go:55] duration metric: took 184.762583ms for default service account to be created ...
	I0911 10:58:29.290238 2222784 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 10:58:29.293354 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:29.476612 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:29.501275 2222784 system_pods.go:86] 17 kube-system pods found
	I0911 10:58:29.501305 2222784 system_pods.go:89] "coredns-5dd5756b68-2cg8c" [a229e351-155b-4d57-9746-e272bb98598b] Running
	I0911 10:58:29.501314 2222784 system_pods.go:89] "csi-hostpath-attacher-0" [245e9000-d196-429f-bf8a-ecced1fb4a71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0911 10:58:29.501322 2222784 system_pods.go:89] "csi-hostpath-resizer-0" [62c4130b-1a92-424a-a665-557da4d3f75b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0911 10:58:29.501330 2222784 system_pods.go:89] "csi-hostpathplugin-nwdhc" [239b8e34-6457-4c49-8ad7-1947faae7550] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0911 10:58:29.501337 2222784 system_pods.go:89] "etcd-addons-554886" [de10e470-2588-4a7f-8e8e-8c84386ee6c5] Running
	I0911 10:58:29.501342 2222784 system_pods.go:89] "kube-apiserver-addons-554886" [c8aff2d0-df06-48cd-a21b-e1b060e3be2d] Running
	I0911 10:58:29.501347 2222784 system_pods.go:89] "kube-controller-manager-addons-554886" [1480e1eb-ad72-4c18-a9a8-a2528659fbf1] Running
	I0911 10:58:29.501355 2222784 system_pods.go:89] "kube-ingress-dns-minikube" [3715ae8a-f6d7-4bfc-b92c-a3586056893e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0911 10:58:29.501359 2222784 system_pods.go:89] "kube-proxy-96wzg" [0655ce43-1406-45df-96a8-df0f9f378891] Running
	I0911 10:58:29.501367 2222784 system_pods.go:89] "kube-scheduler-addons-554886" [b2c82861-60fb-45da-8a82-d487c1c1301c] Running
	I0911 10:58:29.501374 2222784 system_pods.go:89] "metrics-server-7c66d45ddc-7krqz" [68915a10-f10d-4296-8a14-8c21f7f71a42] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 10:58:29.501383 2222784 system_pods.go:89] "registry-proxy-lmsgk" [c3d6d669-7454-4529-b9ac-06abb4face91] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0911 10:58:29.501392 2222784 system_pods.go:89] "registry-t6754" [8531b6ac-003f-4a6d-aab4-67819497ab11] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0911 10:58:29.501406 2222784 system_pods.go:89] "snapshot-controller-58dbcc7b99-2nql9" [e0c9b597-80fb-4724-8eef-0e970bed2638] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.501420 2222784 system_pods.go:89] "snapshot-controller-58dbcc7b99-9f7nb" [0aed7656-2dfe-4ac7-ad14-ab43a08a531f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.501430 2222784 system_pods.go:89] "storage-provisioner" [a512a348-5ded-427c-886d-f1ea3077d8ad] Running
	I0911 10:58:29.501439 2222784 system_pods.go:89] "tiller-deploy-7b677967b9-dtz9n" [871f81ec-dd78-4aa4-89e9-5b99419aa8d5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0911 10:58:29.501448 2222784 system_pods.go:126] duration metric: took 211.204671ms to wait for k8s-apps to be running ...
	I0911 10:58:29.501456 2222784 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 10:58:29.501503 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 10:58:29.502304 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:29.504743 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:29.541148 2222784 system_svc.go:56] duration metric: took 39.676758ms WaitForService to wait for kubelet.
	I0911 10:58:29.541181 2222784 kubeadm.go:581] duration metric: took 42.289590939s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 10:58:29.541203 2222784 node_conditions.go:102] verifying NodePressure condition ...
	I0911 10:58:29.693461 2222784 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 10:58:29.693524 2222784 node_conditions.go:123] node cpu capacity is 2
	I0911 10:58:29.693537 2222784 node_conditions.go:105] duration metric: took 152.329102ms to run NodePressure ...
	I0911 10:58:29.693549 2222784 start.go:228] waiting for startup goroutines ...
	I0911 10:58:29.793593 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:29.978080 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:30.002479 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:30.004780 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:30.294161 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:30.477979 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:30.504947 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:30.506070 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:30.793269 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:30.983549 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:31.004392 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:31.006805 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:31.297236 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:31.481205 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:31.502879 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:31.506105 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:31.792896 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:31.980842 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:32.005072 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:32.005651 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:32.292417 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:32.478115 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:32.503903 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:32.504714 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:32.793172 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:32.983182 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:33.003821 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:33.006789 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:33.293041 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:33.476429 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:33.512778 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:33.527716 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:33.792961 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:33.977328 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:34.012931 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:34.013021 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:34.293032 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:34.486944 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:34.505218 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:34.506172 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:34.794211 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:35.004376 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:35.008955 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:35.009333 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:35.300562 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:35.476670 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:35.517252 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:35.521368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:35.816780 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:35.982509 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:36.010647 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:36.010947 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:36.291941 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:36.480064 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:36.502805 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:36.503067 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:36.796300 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:36.978917 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:37.013632 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:37.014536 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:37.292617 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:37.478443 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:37.502918 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:37.504278 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:37.793530 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:37.977376 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:38.002021 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:38.002930 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:38.292483 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:38.480264 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:38.503175 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:38.504657 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:38.797288 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:38.976221 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:39.002538 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:39.002650 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:39.292426 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:39.476947 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:39.501995 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:39.504840 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:39.796146 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:39.976606 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:40.005642 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:40.006151 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:40.293525 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:40.476611 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:40.502590 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:40.503038 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:40.798077 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:40.978246 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:41.005255 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:41.005850 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:41.293029 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:41.479555 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:41.502758 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:41.503899 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:41.793294 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:41.977754 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:42.002177 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:42.003760 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:42.297137 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:42.478287 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:42.501747 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:42.502433 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:42.794090 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:42.977688 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:43.002051 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:43.003460 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:43.825217 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:43.828590 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:43.828941 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:43.829809 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:43.833317 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:43.977042 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:44.003823 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:44.004884 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:44.293394 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:44.486131 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:44.505203 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:44.507349 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:44.792672 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:44.979384 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:45.009570 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:45.009920 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:45.292771 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:45.482344 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:45.507706 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:45.510005 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:45.793567 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:46.128762 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:46.129250 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:46.129869 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:46.293243 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:46.477426 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:46.509691 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:46.512579 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:46.792585 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:46.977471 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:47.004827 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:47.008931 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:47.293670 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:47.476567 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:47.533365 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:47.564234 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:47.792362 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:47.976764 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:48.002012 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:48.003955 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:48.292368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:48.476545 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:48.502173 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:48.502355 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:48.792930 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:48.983377 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:49.006469 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:49.007210 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:49.292495 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:49.476506 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:49.501753 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:49.503496 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:49.794704 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:49.983728 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:50.009481 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:50.011115 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:50.292701 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:50.477042 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:50.506272 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:50.506302 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:50.793004 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:50.979514 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:51.003402 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:51.003473 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:51.292987 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:51.481355 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:51.501918 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:51.503713 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:51.801531 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:51.977745 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:52.008823 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:52.011368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:52.292244 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:52.476625 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:52.505407 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:52.506921 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:52.792204 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:52.976068 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:53.021505 2222784 kapi.go:107] duration metric: took 56.569029474s to wait for kubernetes.io/minikube-addons=registry ...
	I0911 10:58:53.028311 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:53.445075 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:53.477083 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:53.505638 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:53.792522 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:53.976089 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:54.004626 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:54.294638 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:54.476071 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:54.507622 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:54.793064 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:54.978086 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:55.043957 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:55.325601 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:55.477648 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:55.503077 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:55.792874 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:55.978193 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:56.002323 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:56.294506 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:56.476289 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:56.501898 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:56.792960 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:56.981556 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:57.001984 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:57.294188 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:57.477721 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:57.501617 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:57.792783 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:57.980420 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:58.002730 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:58.293201 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:58.482942 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:58.502805 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:59.026180 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:59.026521 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:59.027719 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:59.293070 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:59.477702 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:59.502056 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:59.792476 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:59.976725 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:00.005856 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:00.293244 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:00.478143 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:00.502850 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:00.792957 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:00.984934 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:01.019036 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:01.292140 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:01.493103 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:01.507925 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:02.074548 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:02.075290 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:02.080308 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:02.294010 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:02.477965 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:02.505401 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:02.793203 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:02.980572 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:03.005146 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:03.293313 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:03.476707 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:03.502038 2222784 kapi.go:107] duration metric: took 1m7.046645516s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0911 10:59:03.792441 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:03.978212 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:04.292436 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:04.476254 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:04.793962 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:04.976687 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:05.294289 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:05.530106 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:05.792985 2222784 kapi.go:107] duration metric: took 1m6.069455368s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0911 10:59:05.795155 2222784 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-554886 cluster.
	I0911 10:59:05.796953 2222784 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0911 10:59:05.798773 2222784 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0911 10:59:05.978955 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:06.478853 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:06.977520 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:07.478180 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:07.977906 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:08.476336 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:08.978846 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:09.688109 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:09.977490 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:10.476562 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:10.976600 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:11.477286 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:11.976566 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:12.477230 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:12.977224 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:13.480541 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:13.980462 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:14.477396 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:14.977193 2222784 kapi.go:107] duration metric: took 1m17.590068667s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0911 10:59:14.979290 2222784 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, helm-tiller, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0911 10:59:14.980804 2222784 addons.go:502] enable addons completed in 1m27.930558594s: enabled=[default-storageclass cloud-spanner inspektor-gadget ingress-dns storage-provisioner helm-tiller metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
	I0911 10:59:14.980863 2222784 start.go:233] waiting for cluster config update ...
	I0911 10:59:14.980893 2222784 start.go:242] writing updated cluster config ...
	I0911 10:59:14.981239 2222784 ssh_runner.go:195] Run: rm -f paused
	I0911 10:59:15.039922 2222784 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 10:59:15.042190 2222784 out.go:177] * Done! kubectl is now configured to use "addons-554886" cluster and "default" namespace by default
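	The gcp-auth messages earlier in this log describe how to opt a pod out of automatic credential mounting by adding a label with the gcp-auth-skip-secret key to the pod configuration. A minimal sketch of such a manifest is shown below; the pod name, image tag, and the "true" label value are illustrative assumptions and are not taken from this test run.

	# Sketch: a pod labelled so the gcp-auth admission webhook skips mounting
	# GCP credentials into it. Name, image tag, and the "true" value are
	# assumptions for illustration only.
	apiVersion: v1
	kind: Pod
	metadata:
	  name: no-gcp-creds            # hypothetical pod name
	  labels:
	    gcp-auth-skip-secret: "true"
	spec:
	  containers:
	  - name: app
	    image: gcr.io/google-samples/hello-app:1.0

	Because the webhook acts at admission time, the label only affects pods created after the addon is enabled; as the log itself notes, existing pods must be recreated, or the addon re-enabled with --refresh, for credential mounting to change.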
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 10:57:03 UTC, ends at Mon 2023-09-11 11:02:05 UTC. --
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.488356348Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d03e6250-cd70-479e-be24-d7a1bafdaf2f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.488674272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d03e6250-cd70-479e-be24-d7a1bafdaf2f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.526951231Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c20d9e4e-1a2b-49eb-8b38-49f6a9dc19e7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.527040808Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c20d9e4e-1a2b-49eb-8b38-49f6a9dc19e7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.527341208Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c20d9e4e-1a2b-49eb-8b38-49f6a9dc19e7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.565926335Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=18abaca0-5fed-4f68-b058-0d9a017bdcc0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.566030503Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=18abaca0-5fed-4f68-b058-0d9a017bdcc0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.566336587Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=18abaca0-5fed-4f68-b058-0d9a017bdcc0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.602314693Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=583e1ad4-ef43-40c6-84ce-3694ed266f5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.602383487Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=583e1ad4-ef43-40c6-84ce-3694ed266f5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.602824248Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=583e1ad4-ef43-40c6-84ce-3694ed266f5a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.638134648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7251cfee-e6e1-495d-807c-f23b65120a32 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.638205489Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7251cfee-e6e1-495d-807c-f23b65120a32 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.638532933Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7251cfee-e6e1-495d-807c-f23b65120a32 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.672879599Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=362bb7cd-f318-4be7-be1d-8fbdee9ee314 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.672947239Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=362bb7cd-f318-4be7-be1d-8fbdee9ee314 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.673388339Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=362bb7cd-f318-4be7-be1d-8fbdee9ee314 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.710336981Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6ea6c887-65ee-4ecd-aea1-86a677abf61e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.710487938Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6ea6c887-65ee-4ecd-aea1-86a677abf61e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.710867931Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6ea6c887-65ee-4ecd-aea1-86a677abf61e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.745188243Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c9cf6731-2020-47c9-9fb7-51d870087d03 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.745267380Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c9cf6731-2020-47c9-9fb7-51d870087d03 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.745613244Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:a1a494b28c46254214ae0d1f81bfce7f8439fc88d90b6250c9f517cb4503161b,PodSandboxId:d0103c0ad5b11fea2e9ac37cd31cd09d6265fcc738b1cf911cb369cf073d00e9,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430118178446676,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-ws6wh,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 16ec07a5-46af-46e6-91ec-7762771ecca8,},Annotations:map[string]string{io.kubernetes.container.hash: b612ceec,io.kubernetes.conta
iner.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernet
es.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.u
id: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bfd,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name
: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,
State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@s
ha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provision
er@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087
d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8
c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&Imag
eSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f7025
32592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12
288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c9cf6731-2020-47c9-9fb7-51d870087d03 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.761029268Z" level=debug msg="Request: &ExecSyncRequest{ContainerId:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,Cmd:[/bin/gadgettracermanager -liveness],Timeout:2,}" file="go-grpc-middleware/chain.go:25" id=c79a9fe2-2d43-4566-867c-91eb2b1b6dc6 name=/runtime.v1.RuntimeService/ExecSync
	Sep 11 11:02:05 addons-554886 crio[718]: time="2023-09-11 11:02:05.761056245Z" level=debug msg="Request: &ExecSyncRequest{ContainerId:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,Cmd:[/bin/gadgettracermanager -liveness],Timeout:2,}" file="go-grpc-middleware/chain.go:25" id=4f097f51-d114-4968-9f2c-f67f434180e9 name=/runtime.v1.RuntimeService/ExecSync
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                               CREATED             STATE               NAME                      ATTEMPT             POD ID
	a1a494b28c462       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb             7 seconds ago       Running             hello-world-app           0                   d0103c0ad5b11
	e6f1300f92f92       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                     2 minutes ago       Running             nginx                     0                   1c0b637d114ac
	a04896a1d5423       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552               2 minutes ago       Running             headlamp                  0                   b514d60c86df5
	95953c2448d53       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06        3 minutes ago       Running             gcp-auth                  0                   9740261cfaa9f
	9708c28af0643       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7   3 minutes ago       Running             gadget                    0                   d1e2db8fa59e2
	04265140b079b       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                    3 minutes ago       Exited              patch                     0                   0fee506f67358
	8666074a25ec4       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                    3 minutes ago       Exited              create                    0                   d41f1c31726c7
	199992096f96d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                    3 minutes ago       Running             storage-provisioner       0                   80d7317a52cb1
	951a4e6a74345       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                                    4 minutes ago       Running             kube-proxy                0                   1be58301fe404
	c6f785152f05d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                    4 minutes ago       Running             coredns                   0                   f5e1ab94d36a1
	b85b6c4269e32       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                                    4 minutes ago       Running             kube-scheduler            0                   d2777b0cf12d1
	c5bfc35139dce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                    4 minutes ago       Running             etcd                      0                   9597f607cedfe
	77cce1e0548e5       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                                    4 minutes ago       Running             kube-apiserver            0                   882038185b326
	81d35d166d610       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                                    4 minutes ago       Running             kube-controller-manager   0                   e968bf9990e35
	
	* 
	* ==> coredns [c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8] <==
	* [INFO] 10.244.0.8:45290 - 41373 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000213146s
	[INFO] 10.244.0.8:44476 - 34673 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000129839s
	[INFO] 10.244.0.8:44476 - 24687 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000146277s
	[INFO] 10.244.0.8:43353 - 23776 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000225767s
	[INFO] 10.244.0.8:43353 - 14562 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100048s
	[INFO] 10.244.0.8:38377 - 22679 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00013966s
	[INFO] 10.244.0.8:38377 - 6549 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000270582s
	[INFO] 10.244.0.8:38525 - 6914 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009834s
	[INFO] 10.244.0.8:38525 - 14588 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102921s
	[INFO] 10.244.0.8:37652 - 28811 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076626s
	[INFO] 10.244.0.8:37652 - 49550 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00035517s
	[INFO] 10.244.0.8:47464 - 31265 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068238s
	[INFO] 10.244.0.8:47464 - 37666 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089247s
	[INFO] 10.244.0.8:46778 - 43950 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075552s
	[INFO] 10.244.0.8:46778 - 52652 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085759s
	[INFO] 10.244.0.19:59535 - 64363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000336353s
	[INFO] 10.244.0.19:40079 - 11444 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000218574s
	[INFO] 10.244.0.19:47190 - 17013 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000317521s
	[INFO] 10.244.0.19:41944 - 41465 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165872s
	[INFO] 10.244.0.19:33585 - 31457 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129588s
	[INFO] 10.244.0.19:60703 - 57198 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115098s
	[INFO] 10.244.0.19:54182 - 29383 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.00056064s
	[INFO] 10.244.0.19:43629 - 22789 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001234053s
	[INFO] 10.244.0.21:42936 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001327073s
	[INFO] 10.244.0.21:45843 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128616s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-554886
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-554886
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=addons-554886
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T10_57_35_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-554886
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 10:57:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-554886
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:02:00 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:00:07 +0000   Mon, 11 Sep 2023 10:57:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:00:07 +0000   Mon, 11 Sep 2023 10:57:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:00:07 +0000   Mon, 11 Sep 2023 10:57:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:00:07 +0000   Mon, 11 Sep 2023 10:57:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    addons-554886
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 698c538a33834a8798df0fdc57bfacd9
	  System UUID:                698c538a-3383-4a87-98df-0fdc57bfacd9
	  Boot ID:                    057dd6fa-4fdc-43cb-a756-ab25caae2723
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-ws6wh         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m35s
	  gadget                      gadget-9pxcq                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m11s
	  gcp-auth                    gcp-auth-d4c87556c-dc54c                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m7s
	  headlamp                    headlamp-699c48fb74-8w9jw                0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m50s
	  kube-system                 coredns-5dd5756b68-2cg8c                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m19s
	  kube-system                 etcd-addons-554886                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m31s
	  kube-system                 kube-apiserver-addons-554886             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m32s
	  kube-system                 kube-controller-manager-addons-554886    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 kube-proxy-96wzg                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  kube-system                 kube-scheduler-addons-554886             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m31s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (4%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 3m59s                  kube-proxy       
	  Normal  Starting                 4m41s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m41s (x8 over 4m41s)  kubelet          Node addons-554886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s (x8 over 4m41s)  kubelet          Node addons-554886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s (x7 over 4m41s)  kubelet          Node addons-554886 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m41s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m32s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  4m32s                  kubelet          Node addons-554886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m32s                  kubelet          Node addons-554886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m32s                  kubelet          Node addons-554886 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m32s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m31s                  kubelet          Node addons-554886 status is now: NodeReady
	  Normal  RegisteredNode           4m20s                  node-controller  Node addons-554886 event: Registered Node addons-554886 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.135166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.743937] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep11 10:57] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141524] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.073117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.156724] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.109352] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.154889] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.112905] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.223172] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[ +10.482421] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +9.303062] systemd-fstab-generator[1246]: Ignoring "noauto" for root device
	[ +25.570784] kauditd_printk_skb: 54 callbacks suppressed
	[Sep11 10:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.439133] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.832510] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.732330] kauditd_printk_skb: 16 callbacks suppressed
	[Sep11 10:59] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.243922] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.901711] kauditd_printk_skb: 25 callbacks suppressed
	[Sep11 11:00] kauditd_printk_skb: 10 callbacks suppressed
	[Sep11 11:02] kauditd_printk_skb: 7 callbacks suppressed
	
	* 
	* ==> etcd [c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45] <==
	* {"level":"warn","ts":"2023-09-11T10:59:02.068046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.677769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T10:59:02.068102Z","caller":"traceutil/trace.go:171","msg":"trace[1101319233] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1047; }","duration":"228.738572ms","start":"2023-09-11T10:59:01.839355Z","end":"2023-09-11T10:59:02.068094Z","steps":["trace[1101319233] 'agreement among raft nodes before linearized reading'  (duration: 228.59701ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:09.672269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.412209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78420"}
	{"level":"info","ts":"2023-09-11T10:59:09.672336Z","caller":"traceutil/trace.go:171","msg":"trace[1446574203] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1098; }","duration":"198.496608ms","start":"2023-09-11T10:59:09.47383Z","end":"2023-09-11T10:59:09.672327Z","steps":["trace[1446574203] 'range keys from in-memory index tree'  (duration: 198.193599ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.069634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T10:59:20.443216Z","caller":"traceutil/trace.go:171","msg":"trace[1132995439] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1155; }","duration":"349.173434ms","start":"2023-09-11T10:59:20.094026Z","end":"2023-09-11T10:59:20.443199Z","steps":["trace[1132995439] 'range keys from in-memory index tree'  (duration: 348.995194ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443254Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.094012Z","time spent":"349.233897ms","remote":"127.0.0.1:36072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-09-11T10:59:20.443397Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.063128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:3110"}
	{"level":"info","ts":"2023-09-11T10:59:20.443445Z","caller":"traceutil/trace.go:171","msg":"trace[452450509] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1155; }","duration":"345.110343ms","start":"2023-09-11T10:59:20.098328Z","end":"2023-09-11T10:59:20.443438Z","steps":["trace[452450509] 'range keys from in-memory index tree'  (duration: 344.977431ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443474Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.098314Z","time spent":"345.15307ms","remote":"127.0.0.1:36118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":3134,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-09-11T10:59:20.443699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.308938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:79184"}
	{"level":"info","ts":"2023-09-11T10:59:20.44381Z","caller":"traceutil/trace.go:171","msg":"trace[727986588] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1155; }","duration":"342.420611ms","start":"2023-09-11T10:59:20.101382Z","end":"2023-09-11T10:59:20.443803Z","steps":["trace[727986588] 'range keys from in-memory index tree'  (duration: 342.195865ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443833Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.10138Z","time spent":"342.446645ms","remote":"127.0.0.1:36118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":17,"response size":79208,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2023-09-11T10:59:20.444097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.761335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:79184"}
	{"level":"info","ts":"2023-09-11T10:59:20.44415Z","caller":"traceutil/trace.go:171","msg":"trace[1359748218] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1155; }","duration":"342.817214ms","start":"2023-09-11T10:59:20.101327Z","end":"2023-09-11T10:59:20.444144Z","steps":["trace[1359748218] 'range keys from in-memory index tree'  (duration: 342.549712ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.44417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.101315Z","time spent":"342.84914ms","remote":"127.0.0.1:36118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":17,"response size":79208,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2023-09-11T10:59:23.550927Z","caller":"traceutil/trace.go:171","msg":"trace[26628167] linearizableReadLoop","detail":"{readStateIndex:1236; appliedIndex:1235; }","duration":"152.398431ms","start":"2023-09-11T10:59:23.398511Z","end":"2023-09-11T10:59:23.55091Z","steps":["trace[26628167] 'read index received'  (duration: 152.285318ms)","trace[26628167] 'applied index is now lower than readState.Index'  (duration: 112.186µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T10:59:23.551383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.869344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:79196"}
	{"level":"info","ts":"2023-09-11T10:59:23.551444Z","caller":"traceutil/trace.go:171","msg":"trace[754521536] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1200; }","duration":"152.950074ms","start":"2023-09-11T10:59:23.398485Z","end":"2023-09-11T10:59:23.551435Z","steps":["trace[754521536] 'agreement among raft nodes before linearized reading'  (duration: 152.728063ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T10:59:53.125671Z","caller":"traceutil/trace.go:171","msg":"trace[880503769] linearizableReadLoop","detail":"{readStateIndex:1397; appliedIndex:1396; }","duration":"200.355956ms","start":"2023-09-11T10:59:52.925291Z","end":"2023-09-11T10:59:53.125647Z","steps":["trace[880503769] 'read index received'  (duration: 200.212977ms)","trace[880503769] 'applied index is now lower than readState.Index'  (duration: 140.325µs)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T10:59:53.126095Z","caller":"traceutil/trace.go:171","msg":"trace[1316770861] transaction","detail":"{read_only:false; response_revision:1352; number_of_response:1; }","duration":"457.11295ms","start":"2023-09-11T10:59:52.668967Z","end":"2023-09-11T10:59:53.12608Z","steps":["trace[1316770861] 'process raft request'  (duration: 456.587563ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:53.126338Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:52.668943Z","time spent":"457.2041ms","remote":"127.0.0.1:36136","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":484,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1334 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:421 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"warn","ts":"2023-09-11T10:59:53.126513Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.236566ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1113"}
	{"level":"info","ts":"2023-09-11T10:59:53.126543Z","caller":"traceutil/trace.go:171","msg":"trace[889096573] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1352; }","duration":"201.261328ms","start":"2023-09-11T10:59:52.925265Z","end":"2023-09-11T10:59:53.126526Z","steps":["trace[889096573] 'agreement among raft nodes before linearized reading'  (duration: 201.205025ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:00:23.330453Z","caller":"traceutil/trace.go:171","msg":"trace[900025127] transaction","detail":"{read_only:false; response_revision:1530; number_of_response:1; }","duration":"117.990909ms","start":"2023-09-11T11:00:23.212449Z","end":"2023-09-11T11:00:23.33044Z","steps":["trace[900025127] 'process raft request'  (duration: 117.512265ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd] <==
	* 2023/09/11 10:59:05 GCP Auth Webhook started!
	2023/09/11 10:59:16 Ready to marshal response ...
	2023/09/11 10:59:16 Ready to write response ...
	2023/09/11 10:59:16 Ready to marshal response ...
	2023/09/11 10:59:16 Ready to write response ...
	2023/09/11 10:59:16 Ready to marshal response ...
	2023/09/11 10:59:16 Ready to write response ...
	2023/09/11 10:59:25 Ready to marshal response ...
	2023/09/11 10:59:25 Ready to write response ...
	2023/09/11 10:59:27 Ready to marshal response ...
	2023/09/11 10:59:27 Ready to write response ...
	2023/09/11 10:59:31 Ready to marshal response ...
	2023/09/11 10:59:31 Ready to write response ...
	2023/09/11 10:59:49 Ready to marshal response ...
	2023/09/11 10:59:49 Ready to write response ...
	2023/09/11 11:00:06 Ready to marshal response ...
	2023/09/11 11:00:06 Ready to write response ...
	2023/09/11 11:01:54 Ready to marshal response ...
	2023/09/11 11:01:54 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  11:02:06 up 5 min,  0 users,  load average: 0.70, 1.65, 0.86
	Linux addons-554886 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067] <==
	* I0911 11:00:24.187463       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:00:24.187576       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:00:24.190433       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:00:24.190523       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:00:24.208538       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:00:24.208662       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:00:24.214407       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:00:24.214599       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:00:24.232400       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:00:24.232467       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0911 11:00:24.245996       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0911 11:00:24.246092       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	E0911 11:00:24.270632       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0911 11:00:24.270802       1 controller.go:159] removing "v1beta1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0911 11:00:24.272515       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	E0911 11:00:24.273448       1 controller.go:159] removing "v1.snapshot.storage.k8s.io" from AggregationController failed with: resource not found
	W0911 11:00:25.233595       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0911 11:00:25.246975       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0911 11:00:25.262600       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0911 11:00:43.165341       1 handler_proxy.go:137] error resolving kube-system/metrics-server: service "metrics-server" not found
	W0911 11:00:43.165396       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 11:00:43.165449       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 11:00:43.165458       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 11:01:55.219678       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.37.160"}
	
	* 
	* ==> kube-controller-manager [81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3] <==
	* W0911 11:00:47.394015       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:00:47.394074       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:01:04.007054       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:01:04.007194       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:01:07.404814       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:01:07.404950       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:01:07.977288       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:01:07.977399       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:01:36.628111       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:01:36.628310       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:01:37.012274       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:01:37.012345       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0911 11:01:50.351413       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0911 11:01:50.351491       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0911 11:01:54.900236       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0911 11:01:54.966008       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-ws6wh"
	I0911 11:01:54.980594       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="81.511594ms"
	I0911 11:01:55.010963       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="30.255838ms"
	I0911 11:01:55.043814       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="32.740083ms"
	I0911 11:01:55.043982       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="55.695µs"
	I0911 11:01:57.858969       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0911 11:01:57.870869       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="5.456µs"
	I0911 11:01:57.876875       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0911 11:01:58.964543       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="12.728023ms"
	I0911 11:01:58.964651       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="40.335µs"
	
	* 
	* ==> kube-proxy [951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a] <==
	* I0911 10:58:05.121629       1 server_others.go:69] "Using iptables proxy"
	I0911 10:58:05.481267       1 node.go:141] Successfully retrieved node IP: 192.168.39.217
	I0911 10:58:06.272399       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 10:58:06.272674       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 10:58:06.279032       1 server_others.go:152] "Using iptables Proxier"
	I0911 10:58:06.279207       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 10:58:06.279428       1 server.go:846] "Version info" version="v1.28.1"
	I0911 10:58:06.279593       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:58:06.280424       1 config.go:188] "Starting service config controller"
	I0911 10:58:06.280492       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 10:58:06.280524       1 config.go:97] "Starting endpoint slice config controller"
	I0911 10:58:06.280540       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 10:58:06.281158       1 config.go:315] "Starting node config controller"
	I0911 10:58:06.281201       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 10:58:06.380642       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 10:58:06.380838       1 shared_informer.go:318] Caches are synced for service config
	I0911 10:58:06.392180       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35] <==
	* W0911 10:57:31.411785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 10:57:31.411796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 10:57:31.414031       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 10:57:31.414093       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 10:57:31.414311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 10:57:31.414352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 10:57:32.257890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0911 10:57:32.257947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0911 10:57:32.316611       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 10:57:32.316665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 10:57:32.505575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 10:57:32.505630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 10:57:32.611911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 10:57:32.612000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 10:57:32.613376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 10:57:32.613440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0911 10:57:32.643133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 10:57:32.643184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 10:57:32.653880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 10:57:32.653989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 10:57:32.743850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 10:57:32.743941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 10:57:32.912934       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 10:57:32.912988       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0911 10:57:34.908484       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 10:57:03 UTC, ends at Mon 2023-09-11 11:02:06 UTC. --
	Sep 11 11:01:54 addons-554886 kubelet[1253]: I0911 11:01:54.975125    1253 memory_manager.go:346] "RemoveStaleState removing state" podUID="6c31e7bf-2e52-4afd-88d9-3d2a22098f66" containerName="task-pv-container"
	Sep 11 11:01:55 addons-554886 kubelet[1253]: I0911 11:01:55.073971    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/16ec07a5-46af-46e6-91ec-7762771ecca8-gcp-creds\") pod \"hello-world-app-5d77478584-ws6wh\" (UID: \"16ec07a5-46af-46e6-91ec-7762771ecca8\") " pod="default/hello-world-app-5d77478584-ws6wh"
	Sep 11 11:01:55 addons-554886 kubelet[1253]: I0911 11:01:55.074084    1253 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6w9lp\" (UniqueName: \"kubernetes.io/projected/16ec07a5-46af-46e6-91ec-7762771ecca8-kube-api-access-6w9lp\") pod \"hello-world-app-5d77478584-ws6wh\" (UID: \"16ec07a5-46af-46e6-91ec-7762771ecca8\") " pod="default/hello-world-app-5d77478584-ws6wh"
	Sep 11 11:01:56 addons-554886 kubelet[1253]: I0911 11:01:56.589166    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdsf6\" (UniqueName: \"kubernetes.io/projected/3715ae8a-f6d7-4bfc-b92c-a3586056893e-kube-api-access-qdsf6\") pod \"3715ae8a-f6d7-4bfc-b92c-a3586056893e\" (UID: \"3715ae8a-f6d7-4bfc-b92c-a3586056893e\") "
	Sep 11 11:01:56 addons-554886 kubelet[1253]: I0911 11:01:56.594464    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3715ae8a-f6d7-4bfc-b92c-a3586056893e-kube-api-access-qdsf6" (OuterVolumeSpecName: "kube-api-access-qdsf6") pod "3715ae8a-f6d7-4bfc-b92c-a3586056893e" (UID: "3715ae8a-f6d7-4bfc-b92c-a3586056893e"). InnerVolumeSpecName "kube-api-access-qdsf6". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 11:01:56 addons-554886 kubelet[1253]: I0911 11:01:56.690406    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-qdsf6\" (UniqueName: \"kubernetes.io/projected/3715ae8a-f6d7-4bfc-b92c-a3586056893e-kube-api-access-qdsf6\") on node \"addons-554886\" DevicePath \"\""
	Sep 11 11:01:56 addons-554886 kubelet[1253]: I0911 11:01:56.919786    1253 scope.go:117] "RemoveContainer" containerID="b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087"
	Sep 11 11:01:56 addons-554886 kubelet[1253]: I0911 11:01:56.967655    1253 scope.go:117] "RemoveContainer" containerID="b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087"
	Sep 11 11:01:56 addons-554886 kubelet[1253]: E0911 11:01:56.968523    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087\": container with ID starting with b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087 not found: ID does not exist" containerID="b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087"
	Sep 11 11:01:56 addons-554886 kubelet[1253]: I0911 11:01:56.968582    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087"} err="failed to get container status \"b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087\": rpc error: code = NotFound desc = could not find container \"b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087\": container with ID starting with b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087 not found: ID does not exist"
	Sep 11 11:01:58 addons-554886 kubelet[1253]: I0911 11:01:58.871464    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="3715ae8a-f6d7-4bfc-b92c-a3586056893e" path="/var/lib/kubelet/pods/3715ae8a-f6d7-4bfc-b92c-a3586056893e/volumes"
	Sep 11 11:01:58 addons-554886 kubelet[1253]: I0911 11:01:58.871977    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8af7ef82-090b-42cf-922e-dfbcdf88d182" path="/var/lib/kubelet/pods/8af7ef82-090b-42cf-922e-dfbcdf88d182/volumes"
	Sep 11 11:01:58 addons-554886 kubelet[1253]: I0911 11:01:58.872443    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="914fe891-c74b-4373-b3c8-c01d60957ad2" path="/var/lib/kubelet/pods/914fe891-c74b-4373-b3c8-c01d60957ad2/volumes"
	Sep 11 11:01:58 addons-554886 kubelet[1253]: I0911 11:01:58.951034    1253 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/hello-world-app-5d77478584-ws6wh" podStartSLOduration=3.288441936 podCreationTimestamp="2023-09-11 11:01:54 +0000 UTC" firstStartedPulling="2023-09-11 11:01:56.472197623 +0000 UTC m=+261.815131610" lastFinishedPulling="2023-09-11 11:01:58.134695035 +0000 UTC m=+263.477629022" observedRunningTime="2023-09-11 11:01:58.949860087 +0000 UTC m=+264.292794091" watchObservedRunningTime="2023-09-11 11:01:58.950939348 +0000 UTC m=+264.293873351"
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.226245    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02937856-34c0-4a63-9601-d8747d12123f-webhook-cert\") pod \"02937856-34c0-4a63-9601-d8747d12123f\" (UID: \"02937856-34c0-4a63-9601-d8747d12123f\") "
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.226292    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rj6sm\" (UniqueName: \"kubernetes.io/projected/02937856-34c0-4a63-9601-d8747d12123f-kube-api-access-rj6sm\") pod \"02937856-34c0-4a63-9601-d8747d12123f\" (UID: \"02937856-34c0-4a63-9601-d8747d12123f\") "
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.231856    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/02937856-34c0-4a63-9601-d8747d12123f-kube-api-access-rj6sm" (OuterVolumeSpecName: "kube-api-access-rj6sm") pod "02937856-34c0-4a63-9601-d8747d12123f" (UID: "02937856-34c0-4a63-9601-d8747d12123f"). InnerVolumeSpecName "kube-api-access-rj6sm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.232386    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/02937856-34c0-4a63-9601-d8747d12123f-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "02937856-34c0-4a63-9601-d8747d12123f" (UID: "02937856-34c0-4a63-9601-d8747d12123f"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.326975    1253 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/02937856-34c0-4a63-9601-d8747d12123f-webhook-cert\") on node \"addons-554886\" DevicePath \"\""
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.327118    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-rj6sm\" (UniqueName: \"kubernetes.io/projected/02937856-34c0-4a63-9601-d8747d12123f-kube-api-access-rj6sm\") on node \"addons-554886\" DevicePath \"\""
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.951969    1253 scope.go:117] "RemoveContainer" containerID="c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7"
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.982466    1253 scope.go:117] "RemoveContainer" containerID="c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7"
	Sep 11 11:02:01 addons-554886 kubelet[1253]: E0911 11:02:01.983198    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7\": container with ID starting with c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7 not found: ID does not exist" containerID="c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7"
	Sep 11 11:02:01 addons-554886 kubelet[1253]: I0911 11:02:01.983280    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7"} err="failed to get container status \"c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7\": rpc error: code = NotFound desc = could not find container \"c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7\": container with ID starting with c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7 not found: ID does not exist"
	Sep 11 11:02:02 addons-554886 kubelet[1253]: I0911 11:02:02.871572    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="02937856-34c0-4a63-9601-d8747d12123f" path="/var/lib/kubelet/pods/02937856-34c0-4a63-9601-d8747d12123f/volumes"
	
	* 
	* ==> storage-provisioner [199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765] <==
	* I0911 10:58:08.458308       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 10:58:08.537261       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 10:58:08.537369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 10:58:08.647030       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 10:58:08.663994       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-554886_4228418e-a4d3-4757-914a-1683fe81d9af!
	I0911 10:58:08.705532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be1f0266-cf3c-44e2-9f33-973c3042cab1", APIVersion:"v1", ResourceVersion:"834", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-554886_4228418e-a4d3-4757-914a-1683fe81d9af became leader
	I0911 10:58:09.088467       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-554886_4228418e-a4d3-4757-914a-1683fe81d9af!
	E0911 11:00:16.495354       1 controller.go:1050] claim "87aa3dd9-8d3a-468d-8d87-96d168034fc3" in work queue no longer exists
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-554886 -n addons-554886
helpers_test.go:261: (dbg) Run:  kubectl --context addons-554886 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (155.99s)

TestAddons/parallel/InspektorGadget (7.77s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-9pxcq" [40439794-e1e9-4402-af31-3eab9b7d98f8] Running
addons_test.go:814: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.031077464s
addons_test.go:817: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-554886
addons_test.go:817: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-554886: exit status 11 (444.796772ms)

-- stdout --
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: runc: sudo runc list -f json: Process exited with status 1
	stdout:
	
	stderr:
	time="2023-09-11T10:59:36Z" level=error msg="stat /run/runc/2c678309d87fb10769df41635ba8f6658b1657e92c6f5aee523134e0c3e8221c: no such file or directory"
	
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_07218961934993dd21acc63caaf1aa08873c018e_7.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
addons_test.go:818: failed to disable inspektor-gadget addon: args "out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-554886" : exit status 11
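Note on the MK_ADDON_DISABLE_PAUSED failure above: before disabling an addon, minikube first checks the node for paused containers, and the stderr shows that check shells out to the quoted command, "sudo runc list -f json". The "stat /run/runc/...: no such file or directory" message suggests a container's state directory vanished between CRI-O removing the container and runc walking /run/runc, so the listing exited non-zero and the whole disable aborted. The Go sketch below is a minimal, hypothetical reproduction of such a paused-container check; it is not minikube's actual implementation, and the struct fields and error handling are assumptions for illustration only.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// runcContainer holds the fields of interest from "runc list -f json" output
// (field names assumed for illustration).
type runcContainer struct {
	ID     string `json:"id"`
	Status string `json:"status"`
}

// listPausedContainers runs the same command quoted in the error above and
// returns the IDs of containers reported as paused. If a container's state
// directory under /run/runc disappears while runc is listing, runc exits
// non-zero and the whole check fails, which is what surfaced here as
// MK_ADDON_DISABLE_PAUSED.
func listPausedContainers() ([]string, error) {
	out, err := exec.Command("sudo", "runc", "list", "-f", "json").Output()
	if err != nil {
		return nil, fmt.Errorf("runc list: %w", err)
	}
	var containers []runcContainer
	if err := json.Unmarshal(out, &containers); err != nil {
		return nil, fmt.Errorf("decode runc output: %w", err)
	}
	var paused []string
	for _, c := range containers {
		if c.Status == "paused" {
			paused = append(paused, c.ID)
		}
	}
	return paused, nil
}

func main() {
	ids, err := listPausedContainers()
	if err != nil {
		fmt.Println("check paused failed:", err)
		return
	}
	fmt.Println("paused containers:", ids)
}

Run on the node (for example via "minikube ssh"), a sketch like this would hit the same failure mode whenever runc encounters a stale /run/runc entry.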
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-554886 -n addons-554886
helpers_test.go:244: <<< TestAddons/parallel/InspektorGadget FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/InspektorGadget]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-554886 logs -n 25: (1.440961154s)
helpers_test.go:252: TestAddons/parallel/InspektorGadget logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |                     |
	|         | -p download-only-461050        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| start   | -o=json --download-only        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |                     |
	|         | -p download-only-461050        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| delete  | -p download-only-461050        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| delete  | -p download-only-461050        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| start   | --download-only -p             | binary-mirror-417783 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |                     |
	|         | binary-mirror-417783           |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --binary-mirror                |                      |         |         |                     |                     |
	|         | http://127.0.0.1:34313         |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-417783        | binary-mirror-417783 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:56 UTC |
	| start   | -p addons-554886               | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC | 11 Sep 23 10:59 UTC |
	|         | --wait=true --memory=4000      |                      |         |         |                     |                     |
	|         | --alsologtostderr              |                      |         |         |                     |                     |
	|         | --addons=registry              |                      |         |         |                     |                     |
	|         | --addons=metrics-server        |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots       |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver   |                      |         |         |                     |                     |
	|         | --addons=gcp-auth              |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner         |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget      |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --addons=ingress               |                      |         |         |                     |                     |
	|         | --addons=ingress-dns           |                      |         |         |                     |                     |
	|         | --addons=helm-tiller           |                      |         |         |                     |                     |
	| addons  | enable headlamp                | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | -p addons-554886               |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p       | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | addons-554886                  |                      |         |         |                     |                     |
	| addons  | addons-554886 addons           | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | disable metrics-server         |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1         |                      |         |         |                     |                     |
	| ip      | addons-554886 ip               | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	| addons  | addons-554886 addons disable   | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | registry --alsologtostderr     |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | addons-554886 addons disable   | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC | 11 Sep 23 10:59 UTC |
	|         | helm-tiller --alsologtostderr  |                      |         |         |                     |                     |
	|         | -v=1                           |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p    | addons-554886        | jenkins | v1.31.2 | 11 Sep 23 10:59 UTC |                     |
	|         | addons-554886                  |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 10:56:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 10:56:49.579022 2222784 out.go:296] Setting OutFile to fd 1 ...
	I0911 10:56:49.579186 2222784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:49.579196 2222784 out.go:309] Setting ErrFile to fd 2...
	I0911 10:56:49.579203 2222784 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:49.579424 2222784 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 10:56:49.580088 2222784 out.go:303] Setting JSON to false
	I0911 10:56:49.581079 2222784 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":232761,"bootTime":1694197049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 10:56:49.581152 2222784 start.go:138] virtualization: kvm guest
	I0911 10:56:49.584066 2222784 out.go:177] * [addons-554886] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 10:56:49.585986 2222784 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 10:56:49.585931 2222784 notify.go:220] Checking for updates...
	I0911 10:56:49.587749 2222784 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 10:56:49.589559 2222784 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 10:56:49.591322 2222784 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:49.593117 2222784 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 10:56:49.595380 2222784 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 10:56:49.597333 2222784 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 10:56:49.632682 2222784 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 10:56:49.634208 2222784 start.go:298] selected driver: kvm2
	I0911 10:56:49.634228 2222784 start.go:902] validating driver "kvm2" against <nil>
	I0911 10:56:49.634253 2222784 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 10:56:49.635286 2222784 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 10:56:49.635384 2222784 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 10:56:49.651187 2222784 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 10:56:49.651247 2222784 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 10:56:49.651482 2222784 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 10:56:49.651518 2222784 cni.go:84] Creating CNI manager for ""
	I0911 10:56:49.651530 2222784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 10:56:49.651542 2222784 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 10:56:49.651550 2222784 start_flags.go:321] config:
	{Name:addons-554886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 10:56:49.651679 2222784 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 10:56:49.653639 2222784 out.go:177] * Starting control plane node addons-554886 in cluster addons-554886
	I0911 10:56:49.655305 2222784 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 10:56:49.655347 2222784 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 10:56:49.655360 2222784 cache.go:57] Caching tarball of preloaded images
	I0911 10:56:49.655449 2222784 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 10:56:49.655463 2222784 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 10:56:49.655843 2222784 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/config.json ...
	I0911 10:56:49.655874 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/config.json: {Name:mkb9d47aea5b20199ee73d14d304ac7e99ccbda8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:56:49.656026 2222784 start.go:365] acquiring machines lock for addons-554886: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 10:56:49.656074 2222784 start.go:369] acquired machines lock for "addons-554886" in 31.701µs
	I0911 10:56:49.656115 2222784 start.go:93] Provisioning new machine with config: &{Name:addons-554886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 10:56:49.656210 2222784 start.go:125] createHost starting for "" (driver="kvm2")
	I0911 10:56:49.658315 2222784 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0911 10:56:49.658480 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:56:49.658542 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:56:49.673999 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45385
	I0911 10:56:49.674546 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:56:49.675244 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:56:49.675271 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:56:49.675636 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:56:49.675864 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:56:49.676055 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:56:49.676240 2222784 start.go:159] libmachine.API.Create for "addons-554886" (driver="kvm2")
	I0911 10:56:49.676272 2222784 client.go:168] LocalClient.Create starting
	I0911 10:56:49.676357 2222784 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem
	I0911 10:56:49.810301 2222784 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem
	I0911 10:56:49.916295 2222784 main.go:141] libmachine: Running pre-create checks...
	I0911 10:56:49.916322 2222784 main.go:141] libmachine: (addons-554886) Calling .PreCreateCheck
	I0911 10:56:49.916981 2222784 main.go:141] libmachine: (addons-554886) Calling .GetConfigRaw
	I0911 10:56:49.917538 2222784 main.go:141] libmachine: Creating machine...
	I0911 10:56:49.917560 2222784 main.go:141] libmachine: (addons-554886) Calling .Create
	I0911 10:56:49.917795 2222784 main.go:141] libmachine: (addons-554886) Creating KVM machine...
	I0911 10:56:49.919242 2222784 main.go:141] libmachine: (addons-554886) DBG | found existing default KVM network
	I0911 10:56:49.920187 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:49.920013 2222816 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0000298b0}
	I0911 10:56:49.926251 2222784 main.go:141] libmachine: (addons-554886) DBG | trying to create private KVM network mk-addons-554886 192.168.39.0/24...
	I0911 10:56:50.003766 2222784 main.go:141] libmachine: (addons-554886) DBG | private KVM network mk-addons-554886 192.168.39.0/24 created
	I0911 10:56:50.003806 2222784 main.go:141] libmachine: (addons-554886) Setting up store path in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886 ...
	I0911 10:56:50.003889 2222784 main.go:141] libmachine: (addons-554886) Building disk image from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 10:56:50.003935 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.003761 2222816 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:50.003973 2222784 main.go:141] libmachine: (addons-554886) Downloading /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0911 10:56:50.260017 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.259871 2222816 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa...
	I0911 10:56:50.381805 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.381599 2222816 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/addons-554886.rawdisk...
	I0911 10:56:50.381849 2222784 main.go:141] libmachine: (addons-554886) DBG | Writing magic tar header
	I0911 10:56:50.381866 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886 (perms=drwx------)
	I0911 10:56:50.381884 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines (perms=drwxr-xr-x)
	I0911 10:56:50.381893 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube (perms=drwxr-xr-x)
	I0911 10:56:50.381911 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273 (perms=drwxrwxr-x)
	I0911 10:56:50.381923 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0911 10:56:50.381938 2222784 main.go:141] libmachine: (addons-554886) DBG | Writing SSH key tar header
	I0911 10:56:50.381951 2222784 main.go:141] libmachine: (addons-554886) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0911 10:56:50.381970 2222784 main.go:141] libmachine: (addons-554886) Creating domain...
	I0911 10:56:50.381991 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:50.381729 2222816 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886 ...
	I0911 10:56:50.382011 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886
	I0911 10:56:50.382031 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines
	I0911 10:56:50.382053 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:50.382071 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273
	I0911 10:56:50.382081 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0911 10:56:50.382095 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home/jenkins
	I0911 10:56:50.382106 2222784 main.go:141] libmachine: (addons-554886) DBG | Checking permissions on dir: /home
	I0911 10:56:50.382115 2222784 main.go:141] libmachine: (addons-554886) DBG | Skipping /home - not owner
	I0911 10:56:50.383470 2222784 main.go:141] libmachine: (addons-554886) define libvirt domain using xml: 
	I0911 10:56:50.383500 2222784 main.go:141] libmachine: (addons-554886) <domain type='kvm'>
	I0911 10:56:50.383508 2222784 main.go:141] libmachine: (addons-554886)   <name>addons-554886</name>
	I0911 10:56:50.383513 2222784 main.go:141] libmachine: (addons-554886)   <memory unit='MiB'>4000</memory>
	I0911 10:56:50.383520 2222784 main.go:141] libmachine: (addons-554886)   <vcpu>2</vcpu>
	I0911 10:56:50.383525 2222784 main.go:141] libmachine: (addons-554886)   <features>
	I0911 10:56:50.383531 2222784 main.go:141] libmachine: (addons-554886)     <acpi/>
	I0911 10:56:50.383535 2222784 main.go:141] libmachine: (addons-554886)     <apic/>
	I0911 10:56:50.383541 2222784 main.go:141] libmachine: (addons-554886)     <pae/>
	I0911 10:56:50.383549 2222784 main.go:141] libmachine: (addons-554886)     
	I0911 10:56:50.383555 2222784 main.go:141] libmachine: (addons-554886)   </features>
	I0911 10:56:50.383563 2222784 main.go:141] libmachine: (addons-554886)   <cpu mode='host-passthrough'>
	I0911 10:56:50.383584 2222784 main.go:141] libmachine: (addons-554886)   
	I0911 10:56:50.383595 2222784 main.go:141] libmachine: (addons-554886)   </cpu>
	I0911 10:56:50.383636 2222784 main.go:141] libmachine: (addons-554886)   <os>
	I0911 10:56:50.383691 2222784 main.go:141] libmachine: (addons-554886)     <type>hvm</type>
	I0911 10:56:50.383707 2222784 main.go:141] libmachine: (addons-554886)     <boot dev='cdrom'/>
	I0911 10:56:50.383713 2222784 main.go:141] libmachine: (addons-554886)     <boot dev='hd'/>
	I0911 10:56:50.383719 2222784 main.go:141] libmachine: (addons-554886)     <bootmenu enable='no'/>
	I0911 10:56:50.383729 2222784 main.go:141] libmachine: (addons-554886)   </os>
	I0911 10:56:50.383737 2222784 main.go:141] libmachine: (addons-554886)   <devices>
	I0911 10:56:50.383745 2222784 main.go:141] libmachine: (addons-554886)     <disk type='file' device='cdrom'>
	I0911 10:56:50.383787 2222784 main.go:141] libmachine: (addons-554886)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/boot2docker.iso'/>
	I0911 10:56:50.383819 2222784 main.go:141] libmachine: (addons-554886)       <target dev='hdc' bus='scsi'/>
	I0911 10:56:50.383835 2222784 main.go:141] libmachine: (addons-554886)       <readonly/>
	I0911 10:56:50.383848 2222784 main.go:141] libmachine: (addons-554886)     </disk>
	I0911 10:56:50.383874 2222784 main.go:141] libmachine: (addons-554886)     <disk type='file' device='disk'>
	I0911 10:56:50.383889 2222784 main.go:141] libmachine: (addons-554886)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0911 10:56:50.383914 2222784 main.go:141] libmachine: (addons-554886)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/addons-554886.rawdisk'/>
	I0911 10:56:50.383932 2222784 main.go:141] libmachine: (addons-554886)       <target dev='hda' bus='virtio'/>
	I0911 10:56:50.383945 2222784 main.go:141] libmachine: (addons-554886)     </disk>
	I0911 10:56:50.383955 2222784 main.go:141] libmachine: (addons-554886)     <interface type='network'>
	I0911 10:56:50.383969 2222784 main.go:141] libmachine: (addons-554886)       <source network='mk-addons-554886'/>
	I0911 10:56:50.383980 2222784 main.go:141] libmachine: (addons-554886)       <model type='virtio'/>
	I0911 10:56:50.383993 2222784 main.go:141] libmachine: (addons-554886)     </interface>
	I0911 10:56:50.384009 2222784 main.go:141] libmachine: (addons-554886)     <interface type='network'>
	I0911 10:56:50.384023 2222784 main.go:141] libmachine: (addons-554886)       <source network='default'/>
	I0911 10:56:50.384035 2222784 main.go:141] libmachine: (addons-554886)       <model type='virtio'/>
	I0911 10:56:50.384045 2222784 main.go:141] libmachine: (addons-554886)     </interface>
	I0911 10:56:50.384057 2222784 main.go:141] libmachine: (addons-554886)     <serial type='pty'>
	I0911 10:56:50.384068 2222784 main.go:141] libmachine: (addons-554886)       <target port='0'/>
	I0911 10:56:50.384084 2222784 main.go:141] libmachine: (addons-554886)     </serial>
	I0911 10:56:50.384096 2222784 main.go:141] libmachine: (addons-554886)     <console type='pty'>
	I0911 10:56:50.384107 2222784 main.go:141] libmachine: (addons-554886)       <target type='serial' port='0'/>
	I0911 10:56:50.384119 2222784 main.go:141] libmachine: (addons-554886)     </console>
	I0911 10:56:50.384130 2222784 main.go:141] libmachine: (addons-554886)     <rng model='virtio'>
	I0911 10:56:50.384142 2222784 main.go:141] libmachine: (addons-554886)       <backend model='random'>/dev/random</backend>
	I0911 10:56:50.384163 2222784 main.go:141] libmachine: (addons-554886)     </rng>
	I0911 10:56:50.384175 2222784 main.go:141] libmachine: (addons-554886)     
	I0911 10:56:50.384186 2222784 main.go:141] libmachine: (addons-554886)     
	I0911 10:56:50.384199 2222784 main.go:141] libmachine: (addons-554886)   </devices>
	I0911 10:56:50.384211 2222784 main.go:141] libmachine: (addons-554886) </domain>
	I0911 10:56:50.384231 2222784 main.go:141] libmachine: (addons-554886) 
	I0911 10:56:50.389287 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:f6:ba:d8 in network default
	I0911 10:56:50.390121 2222784 main.go:141] libmachine: (addons-554886) Ensuring networks are active...
	I0911 10:56:50.390163 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:50.390924 2222784 main.go:141] libmachine: (addons-554886) Ensuring network default is active
	I0911 10:56:50.391557 2222784 main.go:141] libmachine: (addons-554886) Ensuring network mk-addons-554886 is active
	I0911 10:56:50.392078 2222784 main.go:141] libmachine: (addons-554886) Getting domain xml...
	I0911 10:56:50.392870 2222784 main.go:141] libmachine: (addons-554886) Creating domain...
	I0911 10:56:51.638833 2222784 main.go:141] libmachine: (addons-554886) Waiting to get IP...
	I0911 10:56:51.639727 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:51.640136 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:51.640205 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:51.640115 2222816 retry.go:31] will retry after 221.869338ms: waiting for machine to come up
	I0911 10:56:51.863778 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:51.864281 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:51.864313 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:51.864225 2222816 retry.go:31] will retry after 382.483832ms: waiting for machine to come up
	I0911 10:56:52.249137 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:52.249544 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:52.249568 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:52.249495 2222816 retry.go:31] will retry after 373.419457ms: waiting for machine to come up
	I0911 10:56:52.624135 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:52.624575 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:52.624605 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:52.624532 2222816 retry.go:31] will retry after 502.42247ms: waiting for machine to come up
	I0911 10:56:53.128372 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:53.128741 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:53.128769 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:53.128687 2222816 retry.go:31] will retry after 703.115816ms: waiting for machine to come up
	I0911 10:56:53.833765 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:53.834201 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:53.834234 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:53.834133 2222816 retry.go:31] will retry after 810.829781ms: waiting for machine to come up
	I0911 10:56:54.647009 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:54.647418 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:54.647450 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:54.647364 2222816 retry.go:31] will retry after 786.103123ms: waiting for machine to come up
	I0911 10:56:55.435063 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:55.435558 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:55.435586 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:55.435476 2222816 retry.go:31] will retry after 1.216968943s: waiting for machine to come up
	I0911 10:56:56.654297 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:56.654795 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:56.654826 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:56.654742 2222816 retry.go:31] will retry after 1.645693064s: waiting for machine to come up
	I0911 10:56:58.302914 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:58.303343 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:58.303368 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:58.303289 2222816 retry.go:31] will retry after 1.403118165s: waiting for machine to come up
	I0911 10:56:59.709826 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:56:59.710299 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:56:59.710350 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:56:59.710250 2222816 retry.go:31] will retry after 1.793989775s: waiting for machine to come up
	I0911 10:57:01.506125 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:01.506628 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:57:01.506695 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:57:01.506571 2222816 retry.go:31] will retry after 2.373189625s: waiting for machine to come up
	I0911 10:57:03.883358 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:03.883770 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:57:03.883806 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:57:03.883719 2222816 retry.go:31] will retry after 4.354927218s: waiting for machine to come up
	I0911 10:57:08.242958 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:08.243439 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find current IP address of domain addons-554886 in network mk-addons-554886
	I0911 10:57:08.243464 2222784 main.go:141] libmachine: (addons-554886) DBG | I0911 10:57:08.243420 2222816 retry.go:31] will retry after 3.80832799s: waiting for machine to come up
	I0911 10:57:12.055397 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.055843 2222784 main.go:141] libmachine: (addons-554886) Found IP for machine: 192.168.39.217
	I0911 10:57:12.055874 2222784 main.go:141] libmachine: (addons-554886) Reserving static IP address...
	I0911 10:57:12.055889 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has current primary IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.056292 2222784 main.go:141] libmachine: (addons-554886) DBG | unable to find host DHCP lease matching {name: "addons-554886", mac: "52:54:00:c7:87:82", ip: "192.168.39.217"} in network mk-addons-554886
	I0911 10:57:12.151321 2222784 main.go:141] libmachine: (addons-554886) DBG | Getting to WaitForSSH function...
	I0911 10:57:12.151359 2222784 main.go:141] libmachine: (addons-554886) Reserved static IP address: 192.168.39.217
	I0911 10:57:12.151374 2222784 main.go:141] libmachine: (addons-554886) Waiting for SSH to be available...
	I0911 10:57:12.154477 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.155074 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.155110 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.155252 2222784 main.go:141] libmachine: (addons-554886) DBG | Using SSH client type: external
	I0911 10:57:12.155273 2222784 main.go:141] libmachine: (addons-554886) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa (-rw-------)
	I0911 10:57:12.155320 2222784 main.go:141] libmachine: (addons-554886) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.217 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 10:57:12.155340 2222784 main.go:141] libmachine: (addons-554886) DBG | About to run SSH command:
	I0911 10:57:12.155349 2222784 main.go:141] libmachine: (addons-554886) DBG | exit 0
	I0911 10:57:12.248936 2222784 main.go:141] libmachine: (addons-554886) DBG | SSH cmd err, output: <nil>: 
	I0911 10:57:12.249211 2222784 main.go:141] libmachine: (addons-554886) KVM machine creation complete!
	I0911 10:57:12.249492 2222784 main.go:141] libmachine: (addons-554886) Calling .GetConfigRaw
	I0911 10:57:12.250107 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:12.250333 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:12.250585 2222784 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 10:57:12.250619 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:12.252102 2222784 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 10:57:12.252122 2222784 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 10:57:12.252129 2222784 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 10:57:12.252136 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.254964 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.255611 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.255675 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.255724 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.255932 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.256124 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.256272 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.256470 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.257608 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.257637 2222784 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 10:57:12.384242 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 10:57:12.384286 2222784 main.go:141] libmachine: Detecting the provisioner...
	I0911 10:57:12.384298 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.387300 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.387707 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.387737 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.387954 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.388234 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.388406 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.388539 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.388675 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.389171 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.389191 2222784 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 10:57:12.518389 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 10:57:12.518497 2222784 main.go:141] libmachine: found compatible host: buildroot
	I0911 10:57:12.518512 2222784 main.go:141] libmachine: Provisioning with buildroot...
	I0911 10:57:12.518524 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:57:12.518863 2222784 buildroot.go:166] provisioning hostname "addons-554886"
	I0911 10:57:12.518892 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:57:12.519134 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.521915 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.522257 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.522288 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.522421 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.522736 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.522945 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.523115 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.523340 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.523993 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.524013 2222784 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-554886 && echo "addons-554886" | sudo tee /etc/hostname
	I0911 10:57:12.661186 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-554886
	
	I0911 10:57:12.661234 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.664403 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.664780 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.664835 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.665008 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.665233 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.665403 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.665589 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.665713 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:12.666143 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:12.666172 2222784 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-554886' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-554886/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-554886' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 10:57:12.802281 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 10:57:12.802318 2222784 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 10:57:12.802372 2222784 buildroot.go:174] setting up certificates
	I0911 10:57:12.802384 2222784 provision.go:83] configureAuth start
	I0911 10:57:12.802397 2222784 main.go:141] libmachine: (addons-554886) Calling .GetMachineName
	I0911 10:57:12.802720 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:12.805470 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.805953 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.805989 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.806144 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.808433 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.808711 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.808750 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.808916 2222784 provision.go:138] copyHostCerts
	I0911 10:57:12.809022 2222784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 10:57:12.809197 2222784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 10:57:12.809314 2222784 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 10:57:12.809386 2222784 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.addons-554886 san=[192.168.39.217 192.168.39.217 localhost 127.0.0.1 minikube addons-554886]
	I0911 10:57:12.973496 2222784 provision.go:172] copyRemoteCerts
	I0911 10:57:12.973571 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 10:57:12.973647 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:12.976547 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.976953 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:12.976992 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:12.977171 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:12.977453 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:12.977670 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:12.977913 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.070339 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 10:57:13.094670 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 10:57:13.122326 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 10:57:13.147305 2222784 provision.go:86] duration metric: configureAuth took 344.903278ms
	I0911 10:57:13.147342 2222784 buildroot.go:189] setting minikube options for container-runtime
	I0911 10:57:13.147571 2222784 config.go:182] Loaded profile config "addons-554886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 10:57:13.147654 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.151008 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.151477 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.151513 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.151708 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.151906 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.152095 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.152202 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.152378 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:13.152883 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:13.152905 2222784 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 10:57:13.484319 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 10:57:13.484350 2222784 main.go:141] libmachine: Checking connection to Docker...
	I0911 10:57:13.484371 2222784 main.go:141] libmachine: (addons-554886) Calling .GetURL
	I0911 10:57:13.485510 2222784 main.go:141] libmachine: (addons-554886) DBG | Using libvirt version 6000000
	I0911 10:57:13.488021 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.488395 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.488432 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.488586 2222784 main.go:141] libmachine: Docker is up and running!
	I0911 10:57:13.488606 2222784 main.go:141] libmachine: Reticulating splines...
	I0911 10:57:13.488616 2222784 client.go:171] LocalClient.Create took 23.812331343s
	I0911 10:57:13.488644 2222784 start.go:167] duration metric: libmachine.API.Create for "addons-554886" took 23.812405041s
	I0911 10:57:13.488672 2222784 start.go:300] post-start starting for "addons-554886" (driver="kvm2")
	I0911 10:57:13.488688 2222784 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 10:57:13.488725 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.489001 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 10:57:13.489033 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.491388 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.491840 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.491865 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.492016 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.492215 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.492417 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.492562 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.589212 2222784 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 10:57:13.593876 2222784 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 10:57:13.593905 2222784 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 10:57:13.593999 2222784 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 10:57:13.594031 2222784 start.go:303] post-start completed in 105.347267ms
	I0911 10:57:13.594074 2222784 main.go:141] libmachine: (addons-554886) Calling .GetConfigRaw
	I0911 10:57:13.594746 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:13.597543 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.597980 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.598020 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.598346 2222784 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/config.json ...
	I0911 10:57:13.598530 2222784 start.go:128] duration metric: createHost completed in 23.942310791s
	I0911 10:57:13.598555 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.600595 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.601023 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.601054 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.601058 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.601244 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.601405 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.601552 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.601746 2222784 main.go:141] libmachine: Using SSH client type: native
	I0911 10:57:13.602242 2222784 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.217 22 <nil> <nil>}
	I0911 10:57:13.602258 2222784 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 10:57:13.729864 2222784 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694429833.705405453
	
	I0911 10:57:13.729889 2222784 fix.go:206] guest clock: 1694429833.705405453
	I0911 10:57:13.729900 2222784 fix.go:219] Guest: 2023-09-11 10:57:13.705405453 +0000 UTC Remote: 2023-09-11 10:57:13.598542808 +0000 UTC m=+24.055516436 (delta=106.862645ms)
	I0911 10:57:13.729960 2222784 fix.go:190] guest clock delta is within tolerance: 106.862645ms
	I0911 10:57:13.729972 2222784 start.go:83] releasing machines lock for "addons-554886", held for 24.073863036s
	I0911 10:57:13.730019 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.730338 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:13.733133 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.733502 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.733535 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.733665 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.734177 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.734343 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:13.734431 2222784 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 10:57:13.734493 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.734626 2222784 ssh_runner.go:195] Run: cat /version.json
	I0911 10:57:13.734657 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:13.737112 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.737433 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.737486 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.737522 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.737644 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.737858 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.737929 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:13.737956 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:13.738026 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.738106 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:13.738194 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.738251 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:13.738360 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:13.738496 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:13.826349 2222784 ssh_runner.go:195] Run: systemctl --version
	I0911 10:57:13.853408 2222784 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 10:57:14.027489 2222784 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 10:57:14.034545 2222784 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 10:57:14.034643 2222784 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 10:57:14.051149 2222784 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 10:57:14.051182 2222784 start.go:466] detecting cgroup driver to use...
	I0911 10:57:14.051256 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 10:57:14.064682 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 10:57:14.077113 2222784 docker.go:196] disabling cri-docker service (if available) ...
	I0911 10:57:14.077190 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 10:57:14.089823 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 10:57:14.102705 2222784 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 10:57:14.208601 2222784 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 10:57:14.331860 2222784 docker.go:212] disabling docker service ...
	I0911 10:57:14.331950 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 10:57:14.346612 2222784 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 10:57:14.360206 2222784 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 10:57:14.471783 2222784 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 10:57:14.583587 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 10:57:14.597510 2222784 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 10:57:14.615349 2222784 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 10:57:14.615412 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.625912 2222784 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 10:57:14.625987 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.636603 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.646811 2222784 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 10:57:14.657349 2222784 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 10:57:14.668487 2222784 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 10:57:14.677746 2222784 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 10:57:14.677814 2222784 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 10:57:14.691718 2222784 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 10:57:14.701671 2222784 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 10:57:14.810507 2222784 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 10:57:14.987239 2222784 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 10:57:14.987352 2222784 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 10:57:14.992560 2222784 start.go:534] Will wait 60s for crictl version
	I0911 10:57:14.992652 2222784 ssh_runner.go:195] Run: which crictl
	I0911 10:57:14.996637 2222784 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 10:57:15.027885 2222784 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 10:57:15.027998 2222784 ssh_runner.go:195] Run: crio --version
	I0911 10:57:15.072559 2222784 ssh_runner.go:195] Run: crio --version
	I0911 10:57:15.120762 2222784 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 10:57:15.122568 2222784 main.go:141] libmachine: (addons-554886) Calling .GetIP
	I0911 10:57:15.125498 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:15.125967 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:15.126002 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:15.126236 2222784 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 10:57:15.130729 2222784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 10:57:15.143465 2222784 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 10:57:15.143538 2222784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 10:57:15.178122 2222784 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 10:57:15.178212 2222784 ssh_runner.go:195] Run: which lz4
	I0911 10:57:15.182388 2222784 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 10:57:15.187175 2222784 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 10:57:15.187210 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 10:57:17.109170 2222784 crio.go:444] Took 1.926813 seconds to copy over tarball
	I0911 10:57:17.109251 2222784 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 10:57:20.373708 2222784 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.264420589s)
	I0911 10:57:20.373741 2222784 crio.go:451] Took 3.264540 seconds to extract the tarball
	I0911 10:57:20.373754 2222784 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 10:57:20.418003 2222784 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 10:57:20.479160 2222784 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 10:57:20.479188 2222784 cache_images.go:84] Images are preloaded, skipping loading
	I0911 10:57:20.479266 2222784 ssh_runner.go:195] Run: crio config
	I0911 10:57:20.546922 2222784 cni.go:84] Creating CNI manager for ""
	I0911 10:57:20.546958 2222784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 10:57:20.546980 2222784 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 10:57:20.547035 2222784 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.217 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-554886 NodeName:addons-554886 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.217"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.217 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 10:57:20.547211 2222784 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.217
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-554886"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.217
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.217"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 10:57:20.547325 2222784 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=addons-554886 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.217
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 10:57:20.547403 2222784 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 10:57:20.558100 2222784 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 10:57:20.558180 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 10:57:20.568005 2222784 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (373 bytes)
	I0911 10:57:20.585110 2222784 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 10:57:20.602034 2222784 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2100 bytes)
	I0911 10:57:20.619459 2222784 ssh_runner.go:195] Run: grep 192.168.39.217	control-plane.minikube.internal$ /etc/hosts
	I0911 10:57:20.623465 2222784 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.217	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 10:57:20.635536 2222784 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886 for IP: 192.168.39.217
	I0911 10:57:20.635570 2222784 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.635768 2222784 certs.go:204] generating minikubeCA CA: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 10:57:20.737723 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt ...
	I0911 10:57:20.737757 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt: {Name:mk3bdf40aaa3e971cbfc0bb665325eb0a5ce86d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.737936 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key ...
	I0911 10:57:20.737948 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key: {Name:mkba3109852a7b32eb1bd9b47bfb518624795727 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.738024 2222784 certs.go:204] generating proxyClientCA CA: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 10:57:20.838028 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt ...
	I0911 10:57:20.838074 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt: {Name:mk0a269b1262311a1d3492bb27a6644ac573d500 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.838281 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key ...
	I0911 10:57:20.838296 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key: {Name:mk3f002372bc48948e14f9b7fb04e041aabdf242 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.838402 2222784 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.key
	I0911 10:57:20.838416 2222784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt with IP's: []
	I0911 10:57:20.971188 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt ...
	I0911 10:57:20.971228 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: {Name:mk2b361d6ec44224f0767ee31fd839a9e614ba85 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.971455 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.key ...
	I0911 10:57:20.971471 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.key: {Name:mk1acf4568d3df9938cb70ff61f23299e82ed04b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:20.971574 2222784 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f
	I0911 10:57:20.971596 2222784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f with IP's: [192.168.39.217 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 10:57:21.405430 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f ...
	I0911 10:57:21.405469 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f: {Name:mkf7a8c2c8249ef121fad574998703f4a9aa9102 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.405676 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f ...
	I0911 10:57:21.405692 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f: {Name:mk0195efc836f6102a964acbf9831aec9ea7f2e6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.405798 2222784 certs.go:337] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt.891f873f -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt
	I0911 10:57:21.405915 2222784 certs.go:341] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key.891f873f -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key
	I0911 10:57:21.405989 2222784 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key
	I0911 10:57:21.406016 2222784 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt with IP's: []
	I0911 10:57:21.559910 2222784 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt ...
	I0911 10:57:21.559946 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt: {Name:mk10f652c4ac947cb6aa5ca6e0a1aa76dbe78ec8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.560159 2222784 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key ...
	I0911 10:57:21.560175 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key: {Name:mk379e93aebb290122e9527116a9e359bea84285 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:21.560394 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 10:57:21.560436 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 10:57:21.560491 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 10:57:21.560517 2222784 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 10:57:21.561223 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 10:57:21.586545 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 10:57:21.610582 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 10:57:21.634275 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 10:57:21.657539 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 10:57:21.682338 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 10:57:21.707666 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 10:57:21.732625 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 10:57:21.756473 2222784 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 10:57:21.780561 2222784 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 10:57:21.797543 2222784 ssh_runner.go:195] Run: openssl version
	I0911 10:57:21.803539 2222784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 10:57:21.814049 2222784 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 10:57:21.818845 2222784 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 10:57:21.818917 2222784 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 10:57:21.824447 2222784 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
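
The two ssh_runner steps above install the minikube CA into the node's trust store and then register it under its OpenSSL subject-hash name (b5213941.0 in this run), which is how OpenSSL-consuming clients look up trusted CAs in /etc/ssl/certs. A minimal Go sketch of the same idea, assuming openssl is on PATH and the node paths from the log; illustrative only, not minikube's actual implementation:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	pem := "/etc/ssl/certs/minikubeCA.pem" // installed by the earlier "ln -fs" step in the log
    	// "openssl x509 -hash -noout" prints the certificate's subject-name hash, e.g. "b5213941"
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
    	if err != nil {
    		panic(err)
    	}
    	link := "/etc/ssl/certs/" + strings.TrimSpace(string(out)) + ".0"
    	_ = os.Remove(link) // emulate the force flag of "ln -fs"
    	if err := os.Symlink(pem, link); err != nil {
    		panic(err)
    	}
    	fmt.Println(link, "->", pem)
    }
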
	I0911 10:57:21.835378 2222784 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 10:57:21.839777 2222784 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 10:57:21.839881 2222784 kubeadm.go:404] StartCluster: {Name:addons-554886 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:addons-554886 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
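
The StartCluster blob above is the full cluster configuration that minikube carries into the kubeadm setup path. As a reading aid, here is a trimmed-down Go sketch of its shape using only field names and values visible in the dump; the real minikube types carry many more fields:

    package main

    import "fmt"

    // Trimmed illustration of the config printed above; not the actual minikube structs.
    type KubernetesConfig struct {
    	KubernetesVersion string
    	ClusterName       string
    	ContainerRuntime  string
    	ServiceCIDR       string
    }

    type Node struct {
    	IP                string
    	Port              int
    	KubernetesVersion string
    	ControlPlane      bool
    	Worker            bool
    }

    type ClusterConfig struct {
    	Name             string
    	Memory           int
    	CPUs             int
    	Driver           string
    	KubernetesConfig KubernetesConfig
    	Nodes            []Node
    }

    func main() {
    	cfg := ClusterConfig{
    		Name:   "addons-554886",
    		Memory: 4000,
    		CPUs:   2,
    		Driver: "kvm2",
    		KubernetesConfig: KubernetesConfig{
    			KubernetesVersion: "v1.28.1",
    			ClusterName:       "addons-554886",
    			ContainerRuntime:  "crio",
    			ServiceCIDR:       "10.96.0.0/12",
    		},
    		Nodes: []Node{{IP: "192.168.39.217", Port: 8443, KubernetesVersion: "v1.28.1", ControlPlane: true, Worker: true}},
    	}
    	fmt.Printf("%+v\n", cfg)
    }
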
	I0911 10:57:21.840057 2222784 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 10:57:21.840114 2222784 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 10:57:21.874227 2222784 cri.go:89] found id: ""
	I0911 10:57:21.874305 2222784 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 10:57:21.885394 2222784 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 10:57:21.895710 2222784 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 10:57:21.906283 2222784 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 10:57:21.906339 2222784 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 10:57:22.105483 2222784 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 10:57:34.864226 2222784 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 10:57:34.864306 2222784 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 10:57:34.864429 2222784 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 10:57:34.864559 2222784 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 10:57:34.864714 2222784 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 10:57:34.864823 2222784 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 10:57:34.866745 2222784 out.go:204]   - Generating certificates and keys ...
	I0911 10:57:34.866832 2222784 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 10:57:34.866936 2222784 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 10:57:34.867050 2222784 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 10:57:34.867141 2222784 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 10:57:34.867232 2222784 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 10:57:34.867329 2222784 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 10:57:34.867407 2222784 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 10:57:34.867533 2222784 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [addons-554886 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0911 10:57:34.867633 2222784 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 10:57:34.867826 2222784 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [addons-554886 localhost] and IPs [192.168.39.217 127.0.0.1 ::1]
	I0911 10:57:34.867932 2222784 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 10:57:34.868012 2222784 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 10:57:34.868055 2222784 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 10:57:34.868102 2222784 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 10:57:34.868145 2222784 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 10:57:34.868191 2222784 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 10:57:34.868244 2222784 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 10:57:34.868291 2222784 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 10:57:34.868376 2222784 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 10:57:34.868460 2222784 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 10:57:34.870278 2222784 out.go:204]   - Booting up control plane ...
	I0911 10:57:34.870419 2222784 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 10:57:34.870518 2222784 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 10:57:34.870623 2222784 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 10:57:34.870767 2222784 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 10:57:34.870844 2222784 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 10:57:34.870878 2222784 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 10:57:34.871062 2222784 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 10:57:34.871150 2222784 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002903 seconds
	I0911 10:57:34.871301 2222784 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 10:57:34.871456 2222784 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 10:57:34.871543 2222784 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 10:57:34.871817 2222784 kubeadm.go:322] [mark-control-plane] Marking the node addons-554886 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 10:57:34.871919 2222784 kubeadm.go:322] [bootstrap-token] Using token: c827vt.rel7mk8dgs8gzzvy
	I0911 10:57:34.873583 2222784 out.go:204]   - Configuring RBAC rules ...
	I0911 10:57:34.873746 2222784 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 10:57:34.873849 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 10:57:34.873989 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 10:57:34.874138 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 10:57:34.874243 2222784 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 10:57:34.874372 2222784 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 10:57:34.874518 2222784 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 10:57:34.874574 2222784 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 10:57:34.874628 2222784 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 10:57:34.874641 2222784 kubeadm.go:322] 
	I0911 10:57:34.874723 2222784 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 10:57:34.874730 2222784 kubeadm.go:322] 
	I0911 10:57:34.874827 2222784 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 10:57:34.874849 2222784 kubeadm.go:322] 
	I0911 10:57:34.874889 2222784 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 10:57:34.874966 2222784 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 10:57:34.875044 2222784 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 10:57:34.875056 2222784 kubeadm.go:322] 
	I0911 10:57:34.875137 2222784 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 10:57:34.875146 2222784 kubeadm.go:322] 
	I0911 10:57:34.875225 2222784 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 10:57:34.875235 2222784 kubeadm.go:322] 
	I0911 10:57:34.875282 2222784 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 10:57:34.875367 2222784 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 10:57:34.875447 2222784 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 10:57:34.875460 2222784 kubeadm.go:322] 
	I0911 10:57:34.875572 2222784 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 10:57:34.875684 2222784 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 10:57:34.875694 2222784 kubeadm.go:322] 
	I0911 10:57:34.875801 2222784 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token c827vt.rel7mk8dgs8gzzvy \
	I0911 10:57:34.875887 2222784 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 10:57:34.875910 2222784 kubeadm.go:322] 	--control-plane 
	I0911 10:57:34.875916 2222784 kubeadm.go:322] 
	I0911 10:57:34.875996 2222784 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 10:57:34.876009 2222784 kubeadm.go:322] 
	I0911 10:57:34.876112 2222784 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token c827vt.rel7mk8dgs8gzzvy \
	I0911 10:57:34.876284 2222784 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
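
The --discovery-token-ca-cert-hash values printed above let a joining node pin the cluster CA without copying the certificate itself; kubeadm computes them as the SHA-256 of the CA certificate's DER-encoded Subject Public Key Info. A short Go sketch of recomputing such a hash from the CA file copied earlier in this log (/var/lib/minikube/certs/ca.crt); illustrative only, not minikube or kubeadm code:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/ca.crt") // cluster CA copied earlier in the log
    	if err != nil {
    		panic(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		panic("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		panic(err)
    	}
    	// The hash covers the DER-encoded Subject Public Key Info of the CA certificate.
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	fmt.Printf("sha256:%x\n", sum)
    }
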
	I0911 10:57:34.876301 2222784 cni.go:84] Creating CNI manager for ""
	I0911 10:57:34.876308 2222784 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 10:57:34.878270 2222784 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 10:57:34.879867 2222784 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 10:57:34.947968 2222784 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
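
The 457-byte file pushed to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI config this run generates; the log does not show its contents, so the sketch below only illustrates the general shape of a bridge plus host-local conflist with hypothetical values, not what minikube actually wrote:

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    func main() {
    	// Hypothetical minimal bridge CNI config list; every value here is illustrative only.
    	conf := map[string]any{
    		"cniVersion": "0.3.1",
    		"name":       "bridge",
    		"plugins": []map[string]any{
    			{
    				"type":             "bridge",
    				"bridge":           "bridge",
    				"isDefaultGateway": true,
    				"ipMasq":           true,
    				"ipam": map[string]any{
    					"type":   "host-local",
    					"subnet": "10.244.0.0/16",
    				},
    			},
    			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
    		},
    	}
    	out, _ := json.MarshalIndent(conf, "", "  ")
    	fmt.Println(string(out))
    }
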
	I0911 10:57:35.006384 2222784 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 10:57:35.006505 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.006526 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=addons-554886 minikube.k8s.io/updated_at=2023_09_11T10_57_35_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.040722 2222784 ops.go:34] apiserver oom_adj: -16
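
The oom_adj value of -16 read from /proc/$(pgrep kube-apiserver)/oom_adj above means the API server is strongly shielded from the kernel OOM killer (the legacy oom_adj scale runs from -17 to 15, and lower values make a process less likely to be killed). A tiny Go sketch of the same check, assuming pgrep is available on the node; illustrative only:

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    	"strings"
    )

    func main() {
    	// Find the kube-apiserver PID the same way the shell command above does.
    	out, err := exec.Command("pgrep", "kube-apiserver").Output()
    	if err != nil {
    		panic(err)
    	}
    	pid := strings.Fields(string(out))[0] // first matching PID
    	adj, err := os.ReadFile("/proc/" + pid + "/oom_adj")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("kube-apiserver oom_adj: %s", adj)
    }
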
	I0911 10:57:35.218917 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.318269 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:35.915863 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:36.415839 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:36.916099 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:37.415304 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:37.915424 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:38.415957 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:38.915596 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:39.415973 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:39.915166 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:40.415249 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:40.915495 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:41.415898 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:41.916006 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:42.415317 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:42.915377 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:43.415684 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:43.916133 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:44.415863 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:44.915891 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:45.415492 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:45.915168 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:46.415858 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:46.915914 2222784 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 10:57:47.049137 2222784 kubeadm.go:1081] duration metric: took 12.042707541s to wait for elevateKubeSystemPrivileges.
	I0911 10:57:47.049173 2222784 kubeadm.go:406] StartCluster complete in 25.209305474s
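
The repeated `sudo .../kubectl get sa default` runs above are minikube polling roughly every 500ms until the default service account exists, which is what the 12s elevateKubeSystemPrivileges wait measures. A rough Go sketch of that style of poll loop, using the binary and kubeconfig paths from the log; illustrative, not minikube's implementation:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"time"
    )

    func main() {
    	deadline := time.Now().Add(2 * time.Minute)
    	for time.Now().Before(deadline) {
    		cmd := exec.Command("sudo", "/var/lib/minikube/binaries/v1.28.1/kubectl",
    			"get", "sa", "default", "--kubeconfig=/var/lib/minikube/kubeconfig")
    		if err := cmd.Run(); err == nil {
    			fmt.Println("default service account is ready")
    			return
    		}
    		time.Sleep(500 * time.Millisecond) // retry interval; the log shows ~500ms spacing
    	}
    	fmt.Println("timed out waiting for the default service account")
    }
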
	I0911 10:57:47.049200 2222784 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:47.049408 2222784 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 10:57:47.049953 2222784 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 10:57:47.050235 2222784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 10:57:47.050246 2222784 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:true]
	I0911 10:57:47.050421 2222784 addons.go:69] Setting helm-tiller=true in profile "addons-554886"
	I0911 10:57:47.050442 2222784 addons.go:69] Setting metrics-server=true in profile "addons-554886"
	I0911 10:57:47.050447 2222784 addons.go:69] Setting inspektor-gadget=true in profile "addons-554886"
	I0911 10:57:47.050480 2222784 addons.go:231] Setting addon metrics-server=true in "addons-554886"
	I0911 10:57:47.050483 2222784 addons.go:231] Setting addon helm-tiller=true in "addons-554886"
	I0911 10:57:47.050465 2222784 addons.go:69] Setting ingress=true in profile "addons-554886"
	I0911 10:57:47.050487 2222784 config.go:182] Loaded profile config "addons-554886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 10:57:47.050504 2222784 addons.go:69] Setting registry=true in profile "addons-554886"
	I0911 10:57:47.050512 2222784 addons.go:231] Setting addon ingress=true in "addons-554886"
	I0911 10:57:47.050517 2222784 addons.go:231] Setting addon registry=true in "addons-554886"
	I0911 10:57:47.050490 2222784 addons.go:69] Setting ingress-dns=true in profile "addons-554886"
	I0911 10:57:47.050567 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050579 2222784 addons.go:69] Setting cloud-spanner=true in profile "addons-554886"
	I0911 10:57:47.050579 2222784 addons.go:69] Setting default-storageclass=true in profile "addons-554886"
	I0911 10:57:47.050588 2222784 addons.go:69] Setting gcp-auth=true in profile "addons-554886"
	I0911 10:57:47.050591 2222784 addons.go:231] Setting addon cloud-spanner=true in "addons-554886"
	I0911 10:57:47.050590 2222784 addons.go:69] Setting storage-provisioner=true in profile "addons-554886"
	I0911 10:57:47.050598 2222784 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-554886"
	I0911 10:57:47.050604 2222784 mustload.go:65] Loading cluster: addons-554886
	I0911 10:57:47.050619 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050619 2222784 addons.go:231] Setting addon storage-provisioner=true in "addons-554886"
	I0911 10:57:47.050484 2222784 addons.go:231] Setting addon inspektor-gadget=true in "addons-554886"
	I0911 10:57:47.050800 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050803 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050815 2222784 config.go:182] Loaded profile config "addons-554886": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 10:57:47.051115 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051115 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.050568 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.050567 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051156 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051167 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051197 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050580 2222784 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-554886"
	I0911 10:57:47.051296 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051318 2222784 addons.go:231] Setting addon csi-hostpath-driver=true in "addons-554886"
	I0911 10:57:47.051357 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051360 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051392 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051415 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051478 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051486 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051498 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051514 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050428 2222784 addons.go:69] Setting volumesnapshots=true in profile "addons-554886"
	I0911 10:57:47.051556 2222784 addons.go:231] Setting addon volumesnapshots=true in "addons-554886"
	I0911 10:57:47.051591 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051116 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051647 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050570 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051710 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051742 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.050571 2222784 addons.go:231] Setting addon ingress-dns=true in "addons-554886"
	I0911 10:57:47.051842 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.051279 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.051926 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.051953 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.052057 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.052084 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.071966 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44795
	I0911 10:57:47.071983 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35329
	I0911 10:57:47.071965 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41335
	I0911 10:57:47.072421 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.072490 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.073116 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.073137 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.073120 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.073163 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.073571 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.073618 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45253
	I0911 10:57:47.074170 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.074222 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.074260 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.074262 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.074730 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.074759 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.075117 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.075305 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.081120 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.081180 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.081426 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.081471 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.081490 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.081510 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39539
	I0911 10:57:47.081560 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.081931 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.081963 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.082033 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.082053 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.082101 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.082559 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.082593 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.085959 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45473
	I0911 10:57:47.086534 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.087153 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.087190 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.087555 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.088096 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.088144 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.091219 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46141
	I0911 10:57:47.091831 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.092462 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.092480 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.092906 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.093146 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.093824 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.094450 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33697
	I0911 10:57:47.094678 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.094696 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.095188 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.095299 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.096009 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.096028 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.096238 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.096281 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.096387 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.096545 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.096611 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39231
	I0911 10:57:47.097517 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.099457 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.099477 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.100061 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.100663 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.100710 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.100883 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.101195 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45333
	I0911 10:57:47.103766 2222784 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.6.4
	I0911 10:57:47.101815 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.102668 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46223
	I0911 10:57:47.104360 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35187
	I0911 10:57:47.105599 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 10:57:47.105614 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 10:57:47.105639 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.106054 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.106313 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.106330 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.106634 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.106765 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.107438 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.107486 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.107889 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.107908 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.108429 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.109032 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.109075 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.109216 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.109352 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.109366 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.109831 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.109903 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.109948 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.109985 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.110212 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.110649 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.110687 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.110822 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.110996 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
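
Each `new ssh client` entry above corresponds to minikube opening another SSH session to the node (192.168.39.217:22, user docker, key under .minikube/machines/addons-554886/id_rsa) so it can scp the addon manifests in parallel. A bare-bones Go sketch of establishing such a client with golang.org/x/crypto/ssh (an external module); illustrative only, not minikube's sshutil package:

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	keyPath := "/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa"
    	key, err := os.ReadFile(keyPath)
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(key)
    	if err != nil {
    		panic(err)
    	}
    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, never for production
    	}
    	client, err := ssh.Dial("tcp", "192.168.39.217:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()
    	fmt.Println("connected to", client.RemoteAddr())
    }
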
	I0911 10:57:47.113642 2222784 addons.go:231] Setting addon default-storageclass=true in "addons-554886"
	I0911 10:57:47.113693 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:47.114058 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.114105 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.126652 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42407
	I0911 10:57:47.126846 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34589
	I0911 10:57:47.127034 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35785
	I0911 10:57:47.127567 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.127669 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.128220 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.128420 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.128432 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.128784 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.128805 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.128894 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42809
	I0911 10:57:47.128995 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.129279 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.129299 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.129365 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.129434 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.129484 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.130114 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.130137 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.130358 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.130881 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.131070 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.131070 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.131694 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.131775 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.134318 2222784 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.9
	I0911 10:57:47.132541 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.133371 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.133859 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.136190 2222784 addons.go:423] installing /etc/kubernetes/addons/deployment.yaml
	I0911 10:57:47.136212 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1003 bytes)
	I0911 10:57:47.136234 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.138197 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.8.2
	I0911 10:57:47.136445 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41473
	I0911 10:57:47.138697 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42095
	I0911 10:57:47.138788 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45593
	I0911 10:57:47.139681 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.140337 2222784 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.20.0
	I0911 10:57:47.142127 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0911 10:57:47.142146 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0911 10:57:47.142169 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.140360 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.142247 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.140245 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.140294 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 10:57:47.140996 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.141052 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.141495 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.142479 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.142733 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0911 10:57:47.144329 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44831
	I0911 10:57:47.144351 2222784 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 10:57:47.144518 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 10:57:47.144542 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.144637 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 10:57:47.146273 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 10:57:47.144896 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.144953 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.145556 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.145589 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44519
	I0911 10:57:47.145699 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.145770 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.145778 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.145878 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.146737 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.148077 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.148255 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148370 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148422 2222784 addons.go:423] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0911 10:57:47.148440 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16083 bytes)
	I0911 10:57:47.148461 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.148424 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148439 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.148503 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.148854 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.148864 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.148915 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.148933 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.148947 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.148961 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.148958 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.149020 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.149064 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.149080 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.149084 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.149136 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.149426 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.149666 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.149932 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.149988 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.150009 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.150039 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.150394 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.150816 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.150979 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.150999 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.151076 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.151385 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.151481 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.151596 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.153173 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.153358 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.153590 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156060 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0911 10:57:47.154240 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.154288 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156297 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.156315 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156339 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.156914 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.157191 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36629
	I0911 10:57:47.157662 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.157671 2222784 addons.go:423] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0911 10:57:47.157681 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0911 10:57:47.157686 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.157695 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.158096 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.158093 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.159615 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0911 10:57:47.158286 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.158527 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.160759 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.161131 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0911 10:57:47.161295 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.161468 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.162598 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0911 10:57:47.164090 2222784 out.go:177]   - Using image docker.io/registry:2.8.1
	I0911 10:57:47.163120 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.162662 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.164781 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.165779 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0911 10:57:47.165894 2222784 addons.go:423] installing /etc/kubernetes/addons/registry-rc.yaml
	I0911 10:57:47.165910 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.167459 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0911 10:57:47.167479 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0911 10:57:47.167498 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.167506 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.167560 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0911 10:57:47.170596 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.169167 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.170605 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0911 10:57:47.167800 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.168806 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32857
	I0911 10:57:47.167718 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.170938 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.171117 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.172441 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.174208 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0911 10:57:47.172583 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.172656 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.173149 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.173158 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:47.174035 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.174577 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.177470 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0911 10:57:47.175916 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.175945 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.175969 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:47.176069 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.176342 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.179095 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.180877 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0911 10:57:47.179222 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.179493 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.179539 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.179697 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.184067 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0911 10:57:47.182641 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.182697 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.187132 2222784 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0911 10:57:47.187785 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.188841 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0911 10:57:47.188908 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0911 10:57:47.188939 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.191064 2222784 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0911 10:57:47.192728 2222784 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0911 10:57:47.191870 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.192747 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0911 10:57:47.192568 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.192769 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.192772 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.192798 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.192958 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.193071 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.193167 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.195677 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.196097 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.196127 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.196253 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.196450 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.196611 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.196790 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.199711 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45967
	I0911 10:57:47.200189 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:47.200680 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:47.200703 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:47.201036 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:47.201236 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:47.202689 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:47.202933 2222784 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 10:57:47.202951 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 10:57:47.202970 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:47.205763 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.206235 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:47.206264 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:47.206489 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:47.206653 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:47.206826 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:47.206975 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:47.251512 2222784 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-554886" context rescaled to 1 replicas
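The kapi.go rescale logged just above trims the CoreDNS deployment to a single replica, which is enough for this one-node cluster. A hand-run equivalent (hypothetical command, not taken from this log) would be roughly:

    kubectl --context addons-554886 -n kube-system scale deployment coredns --replicas=1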
	I0911 10:57:47.251559 2222784 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.217 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 10:57:47.254075 2222784 out.go:177] * Verifying Kubernetes components...
	I0911 10:57:47.255792 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 10:57:47.474561 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 10:57:47.476120 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0911 10:57:47.476157 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0911 10:57:47.488572 2222784 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0911 10:57:47.488607 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0911 10:57:47.526850 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 10:57:47.526873 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0911 10:57:47.535541 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0911 10:57:47.545357 2222784 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 10:57:47.546159 2222784 node_ready.go:35] waiting up to 6m0s for node "addons-554886" to be "Ready" ...
	I0911 10:57:47.560182 2222784 node_ready.go:49] node "addons-554886" has status "Ready":"True"
	I0911 10:57:47.560214 2222784 node_ready.go:38] duration metric: took 14.028725ms waiting for node "addons-554886" to be "Ready" ...
	I0911 10:57:47.560227 2222784 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
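The node_ready.go and pod_ready.go lines above are minikube polling the API server until the node and all system-critical pods report Ready; the node check finished almost immediately because the control plane was already up. A rough hand-run equivalent of the same checks (illustrative only; these exact commands do not appear in the log) is:

    kubectl --context addons-554886 wait --for=condition=Ready node/addons-554886 --timeout=6m
    kubectl --context addons-554886 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m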
	I0911 10:57:47.578753 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0911 10:57:47.583979 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 10:57:47.601034 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0911 10:57:47.601061 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0911 10:57:47.607288 2222784 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0911 10:57:47.607315 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0911 10:57:47.609842 2222784 addons.go:423] installing /etc/kubernetes/addons/registry-svc.yaml
	I0911 10:57:47.609865 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0911 10:57:47.620394 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0911 10:57:47.622537 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-role.yaml
	I0911 10:57:47.622561 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0911 10:57:47.646978 2222784 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0911 10:57:47.647011 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0911 10:57:47.659367 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 10:57:47.659397 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 10:57:47.674901 2222784 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace to be "Ready" ...
	I0911 10:57:47.717115 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0911 10:57:47.717143 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0911 10:57:47.840573 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0911 10:57:47.840599 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0911 10:57:47.842600 2222784 addons.go:423] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0911 10:57:47.842620 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0911 10:57:47.905975 2222784 addons.go:423] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0911 10:57:47.906006 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0911 10:57:47.920946 2222784 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 10:57:47.920976 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 10:57:47.930968 2222784 addons.go:423] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0911 10:57:47.930995 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0911 10:57:47.934699 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0911 10:57:47.934723 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0911 10:57:48.245609 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0911 10:57:48.292944 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0911 10:57:48.298530 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 10:57:48.303414 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0911 10:57:48.303440 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0911 10:57:48.308184 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0911 10:57:48.308211 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0911 10:57:48.319567 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0911 10:57:48.319594 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0911 10:57:48.350383 2222784 addons.go:423] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0911 10:57:48.350416 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0911 10:57:48.399621 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0911 10:57:48.399652 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0911 10:57:48.433534 2222784 addons.go:423] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 10:57:48.433562 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0911 10:57:48.444367 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0911 10:57:48.444397 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0911 10:57:48.509369 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-crd.yaml
	I0911 10:57:48.509398 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0911 10:57:48.545518 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0911 10:57:48.545550 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0911 10:57:48.559893 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 10:57:48.601433 2222784 addons.go:423] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0911 10:57:48.601463 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7741 bytes)
	I0911 10:57:48.618779 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0911 10:57:48.618811 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0911 10:57:48.691957 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0911 10:57:48.691987 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0911 10:57:48.692060 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0911 10:57:48.749965 2222784 addons.go:423] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0911 10:57:48.750000 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0911 10:57:48.817989 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0911 10:57:50.649963 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:51.897287 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.42268586s)
	I0911 10:57:51.897358 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:51.897375 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:51.897757 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:51.897834 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:51.897876 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:51.897893 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:51.897799 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:51.898143 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:51.898175 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:51.898195 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:51.898209 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:51.898419 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:51.898434 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:53.188237 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.652645319s)
	I0911 10:57:53.188311 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:53.188330 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:53.188351 2222784 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.642955122s)
	I0911 10:57:53.188391 2222784 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
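The bash pipeline that just completed rewrites the coredns ConfigMap in place: it inserts a hosts block immediately above the existing "forward . /etc/resolv.conf" directive so that host.minikube.internal resolves to the host side of the KVM network (192.168.39.1), and adds a log directive before errors. Reconstructed from the sed expressions above (not dumped from the cluster), the injected Corefile fragment reads:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }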
	I0911 10:57:53.188889 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:53.188910 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:53.188921 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:53.188931 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:53.189228 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:53.189256 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:53.189234 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:53.245809 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:54.208377 2222784 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0911 10:57:54.208432 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:54.212265 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.212748 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:54.212786 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.213019 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:54.213295 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:54.213492 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:54.213709 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:54.626241 2222784 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0911 10:57:54.655335 2222784 addons.go:231] Setting addon gcp-auth=true in "addons-554886"
	I0911 10:57:54.655415 2222784 host.go:66] Checking if "addons-554886" exists ...
	I0911 10:57:54.655772 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:54.655831 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:54.671666 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44433
	I0911 10:57:54.672147 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:54.672726 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:54.672762 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:54.673209 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:54.673924 2222784 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 10:57:54.673985 2222784 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 10:57:54.690497 2222784 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33797
	I0911 10:57:54.691034 2222784 main.go:141] libmachine: () Calling .GetVersion
	I0911 10:57:54.691676 2222784 main.go:141] libmachine: Using API Version  1
	I0911 10:57:54.691697 2222784 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 10:57:54.692078 2222784 main.go:141] libmachine: () Calling .GetMachineName
	I0911 10:57:54.692299 2222784 main.go:141] libmachine: (addons-554886) Calling .GetState
	I0911 10:57:54.694284 2222784 main.go:141] libmachine: (addons-554886) Calling .DriverName
	I0911 10:57:54.694567 2222784 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0911 10:57:54.694606 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHHostname
	I0911 10:57:54.697550 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.697970 2222784 main.go:141] libmachine: (addons-554886) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c7:87:82", ip: ""} in network mk-addons-554886: {Iface:virbr1 ExpiryTime:2023-09-11 11:57:06 +0000 UTC Type:0 Mac:52:54:00:c7:87:82 Iaid: IPaddr:192.168.39.217 Prefix:24 Hostname:addons-554886 Clientid:01:52:54:00:c7:87:82}
	I0911 10:57:54.698006 2222784 main.go:141] libmachine: (addons-554886) DBG | domain addons-554886 has defined IP address 192.168.39.217 and MAC address 52:54:00:c7:87:82 in network mk-addons-554886
	I0911 10:57:54.698168 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHPort
	I0911 10:57:54.698390 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHKeyPath
	I0911 10:57:54.698576 2222784 main.go:141] libmachine: (addons-554886) Calling .GetSSHUsername
	I0911 10:57:54.698757 2222784 sshutil.go:53] new ssh client: &{IP:192.168.39.217 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/addons-554886/id_rsa Username:docker}
	I0911 10:57:55.423951 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:56.443860 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (8.859844031s)
	I0911 10:57:56.443890 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (8.865094384s)
	I0911 10:57:56.443931 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.443933 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.443955 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.443978 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.443981 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (8.823551965s)
	I0911 10:57:56.444015 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444033 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444069 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (8.198427892s)
	I0911 10:57:56.444091 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444101 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444152 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (8.15117404s)
	I0911 10:57:56.444248 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (8.145684356s)
	I0911 10:57:56.444473 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.752378074s)
	I0911 10:57:56.444495 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444502 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444512 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444386 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (7.884455517s)
	I0911 10:57:56.444516 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	W0911 10:57:56.444558 2222784 addons.go:449] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0911 10:57:56.444475 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444597 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444606 2222784 retry.go:31] will retry after 311.309959ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
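This retry is the expected behaviour when a custom resource is applied in the same kubectl batch as the CRD that defines it: the three snapshot.storage.k8s.io CRDs are created, but the API server has not yet registered the new kinds by the time kubectl tries to map the csi-hostpath-snapclass object, so the apply exits 1. The object that failed to map is a snapshot class along these lines (a sketch; only the name, kind and apiVersion come from the error above, the driver and deletionPolicy fields are assumed):

    apiVersion: snapshot.storage.k8s.io/v1
    kind: VolumeSnapshotClass
    metadata:
      name: csi-hostpath-snapclass
    driver: hostpath.csi.k8s.io
    deletionPolicy: Delete

On the retry roughly 311ms later (the apply --force run at 10:57:56.757 below), the CRDs are already established, so the same set of manifests goes through in about 2.6 seconds.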
	I0911 10:57:56.444806 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.444833 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.444835 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.444843 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444852 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444858 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.444888 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.444896 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.444906 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444914 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444927 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.444936 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.444945 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.444953 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.444997 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445012 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445021 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.445031 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.445166 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445178 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445188 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.445196 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.445276 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.445299 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445307 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445657 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.445692 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.445705 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.445741 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446377 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.446395 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.446407 2222784 addons.go:467] Verifying addon registry=true in "addons-554886"
	I0911 10:57:56.449802 2222784 out.go:177] * Verifying registry addon...
	I0911 10:57:56.446806 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446836 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.446870 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446892 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.446907 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.446926 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.447007 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.447029 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.451319 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451355 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451357 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451366 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.451368 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.451376 2222784 addons.go:467] Verifying addon ingress=true in "addons-554886"
	I0911 10:57:56.451405 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:56.451424 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.451377 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:56.453090 2222784 out.go:177] * Verifying ingress addon...
	I0911 10:57:56.451750 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.451756 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.451771 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:56.451792 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:56.452471 2222784 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0911 10:57:56.454696 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.454723 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:56.454755 2222784 addons.go:467] Verifying addon metrics-server=true in "addons-554886"
	I0911 10:57:56.455387 2222784 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0911 10:57:56.476306 2222784 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0911 10:57:56.476338 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:56.479667 2222784 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0911 10:57:56.479699 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:56.496088 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:56.496501 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:56.757106 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0911 10:57:57.102705 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:57.198223 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:57.375881 2222784 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (2.681278444s)
	I0911 10:57:57.375940 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (8.557882874s)
	I0911 10:57:57.378022 2222784 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v20230407
	I0911 10:57:57.375998 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:57.379560 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:57.381293 2222784 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.0
	I0911 10:57:57.379967 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:57.380003 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:57.382772 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:57.382800 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:57.382819 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:57.382858 2222784 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0911 10:57:57.382882 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0911 10:57:57.383087 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:57.383139 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:57.383158 2222784 addons.go:467] Verifying addon csi-hostpath-driver=true in "addons-554886"
	I0911 10:57:57.384757 2222784 out.go:177] * Verifying csi-hostpath-driver addon...
	I0911 10:57:57.387120 2222784 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0911 10:57:57.449040 2222784 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0911 10:57:57.449068 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0911 10:57:57.452650 2222784 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0911 10:57:57.452675 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:57.469985 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:57.499030 2222784 addons.go:423] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0911 10:57:57.499067 2222784 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5412 bytes)
	I0911 10:57:57.507869 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:57.510663 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:57.525368 2222784 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0911 10:57:57.832205 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:57:57.977493 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:58.069106 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:58.069186 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:58.480276 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:58.510182 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:58.514574 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:59.013654 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:59.064028 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:59.065494 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:59.399161 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.641967973s)
	I0911 10:57:59.399265 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.399288 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.399708 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.399730 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.399746 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.399762 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.399995 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.400013 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.490944 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:57:59.526657 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:57:59.526990 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:57:59.715486 2222784 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (2.190059706s)
	I0911 10:57:59.715563 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.715609 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.716132 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.716155 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.716166 2222784 main.go:141] libmachine: Making call to close driver server
	I0911 10:57:59.716175 2222784 main.go:141] libmachine: (addons-554886) Calling .Close
	I0911 10:57:59.716215 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:59.716547 2222784 main.go:141] libmachine: Successfully made call to close driver server
	I0911 10:57:59.716564 2222784 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 10:57:59.716569 2222784 main.go:141] libmachine: (addons-554886) DBG | Closing plugin on server side
	I0911 10:57:59.718685 2222784 addons.go:467] Verifying addon gcp-auth=true in "addons-554886"
	I0911 10:57:59.720849 2222784 out.go:177] * Verifying gcp-auth addon...
	I0911 10:57:59.723523 2222784 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0911 10:57:59.761356 2222784 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0911 10:57:59.761381 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:57:59.788147 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
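From here on, the log is mostly the same kapi.go polling loop repeated per addon: list the pods matching the addon's label selector, then re-check until each one leaves Pending and reports Ready. A hand-run spot check for the gcp-auth webhook (illustrative only, not taken from the log) would look like:

    kubectl --context addons-554886 -n gcp-auth get pods -l kubernetes.io/minikube-addons=gcp-auth
    kubectl --context addons-554886 -n gcp-auth wait --for=condition=Ready pod -l kubernetes.io/minikube-addons=gcp-auth --timeout=5m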
	I0911 10:57:59.986976 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:00.014888 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:00.014894 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:00.298192 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:00.301635 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:00.477077 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:00.502601 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:00.504351 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:00.803672 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:00.982323 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:01.008500 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:01.009449 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:01.311269 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:01.480125 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:01.503487 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:01.504185 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:01.802004 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:01.976228 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:02.003570 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:02.003742 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:02.296963 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:02.477085 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:02.504665 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:02.506604 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:02.793642 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:02.794659 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:02.976382 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:03.003788 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:03.004330 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:03.293482 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:03.478382 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:03.504044 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:03.505346 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:03.795206 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:03.980426 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:04.026241 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:04.026251 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:04.312506 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:04.479695 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:04.502951 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:04.503437 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:04.807023 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:04.808462 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:04.993860 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:05.030610 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:05.035528 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:05.296491 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:05.483035 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:05.504008 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:05.504045 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:05.794777 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:05.977304 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:06.007413 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:06.008772 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:06.322715 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:06.480144 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:06.502312 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:06.508620 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:06.793288 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:06.975858 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:07.005458 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:07.009212 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:07.298679 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:07.302394 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:07.476141 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:07.510687 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:07.514128 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:07.792928 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:07.975808 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:08.031517 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:08.033842 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:08.293861 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:08.476945 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:08.506223 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:08.506522 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:08.800972 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:08.985556 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:09.004935 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:09.005047 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:09.301734 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:09.313811 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:09.479405 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:09.519266 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:09.519534 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:09.797931 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:09.981036 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:10.012391 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:10.015694 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:10.294665 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:10.476466 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:10.510874 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:10.512422 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:10.801521 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:11.399467 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:11.401355 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:11.404364 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:11.405093 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:11.515099 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:11.553234 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:11.585665 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:11.585920 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:11.803479 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:11.987560 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:12.005278 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:12.012613 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:12.293939 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:12.478848 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:12.504170 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:12.504322 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:12.821884 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:12.978908 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:13.006174 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:13.006341 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:13.296404 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:13.476446 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:13.503658 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:13.504946 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:13.795014 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:13.809313 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:13.976736 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:14.004875 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:14.009615 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:14.292891 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:14.476016 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:14.507970 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:14.508582 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:14.800694 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:14.976492 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:15.004526 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:15.010056 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:15.293161 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:15.477618 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:15.502949 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:15.504002 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:15.797783 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:15.975689 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:16.012104 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:16.012710 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:16.299343 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:16.299547 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:16.485185 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:16.506709 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:16.513680 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:16.805119 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:16.976470 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:17.006232 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:17.007871 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:17.296107 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:17.477257 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:17.511135 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:17.513042 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:17.793749 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:17.976313 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:18.004354 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:18.005021 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:18.292917 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:18.477101 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:18.503129 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:18.507936 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:18.804237 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:18.805281 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:18.977845 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:19.003361 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:19.003432 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:19.295065 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:19.478841 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:19.502147 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:19.503084 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:19.852914 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:19.976783 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:20.004669 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:20.012613 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:20.292677 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:20.478029 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:20.502968 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:20.511012 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:20.800018 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:20.806517 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:20.979634 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:21.003053 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:21.003664 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:21.293742 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:21.478336 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:21.502841 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:21.502871 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:21.794124 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:22.044221 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:22.045317 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:22.045872 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:22.292893 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:22.479063 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:22.501955 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:22.502388 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:22.953166 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:22.956278 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:22.978957 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:23.004914 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:23.005099 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:23.294457 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:23.479523 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:23.501283 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:23.502741 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:23.798397 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:23.976806 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:24.005986 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:24.006097 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:24.294064 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:24.476893 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:24.504499 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:24.507740 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:24.795802 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:24.977747 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:25.003099 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:25.004911 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:25.296148 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:25.299533 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:25.479573 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:25.504607 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:25.504756 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:25.801011 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:25.985024 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:26.006579 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:26.007317 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:26.301569 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:26.476888 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:26.501642 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:26.501973 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:26.795744 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:26.977708 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:27.003559 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:27.006708 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:27.293803 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:27.477327 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:27.502604 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:27.502916 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:27.795812 2222784 pod_ready.go:102] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"False"
	I0911 10:58:27.798995 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:27.982368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:28.003200 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:28.004627 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:28.311165 2222784 pod_ready.go:92] pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.311202 2222784 pod_ready.go:81] duration metric: took 40.636263094s waiting for pod "coredns-5dd5756b68-2cg8c" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.311217 2222784 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.312082 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:28.316235 2222784 pod_ready.go:97] error getting pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-nprn6" not found
	I0911 10:58:28.316274 2222784 pod_ready.go:81] duration metric: took 5.048768ms waiting for pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace to be "Ready" ...
	E0911 10:58:28.316289 2222784 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-nprn6" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-nprn6" not found
	I0911 10:58:28.316301 2222784 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.329937 2222784 pod_ready.go:92] pod "etcd-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.329970 2222784 pod_ready.go:81] duration metric: took 13.661212ms waiting for pod "etcd-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.329987 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.339437 2222784 pod_ready.go:92] pod "kube-apiserver-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.339469 2222784 pod_ready.go:81] duration metric: took 9.474337ms waiting for pod "kube-apiserver-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.339486 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.369563 2222784 pod_ready.go:92] pod "kube-controller-manager-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.369601 2222784 pod_ready.go:81] duration metric: took 30.106704ms waiting for pod "kube-controller-manager-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.369618 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-96wzg" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.483062 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:28.493980 2222784 pod_ready.go:92] pod "kube-proxy-96wzg" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.494011 2222784 pod_ready.go:81] duration metric: took 124.382695ms waiting for pod "kube-proxy-96wzg" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.494025 2222784 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.505107 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:28.506039 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:28.872892 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:28.891008 2222784 pod_ready.go:92] pod "kube-scheduler-addons-554886" in "kube-system" namespace has status "Ready":"True"
	I0911 10:58:28.891034 2222784 pod_ready.go:81] duration metric: took 397.00219ms waiting for pod "kube-scheduler-addons-554886" in "kube-system" namespace to be "Ready" ...
	I0911 10:58:28.891043 2222784 pod_ready.go:38] duration metric: took 41.330801285s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 10:58:28.891064 2222784 api_server.go:52] waiting for apiserver process to appear ...
	I0911 10:58:28.891128 2222784 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 10:58:28.932638 2222784 api_server.go:72] duration metric: took 41.681036941s to wait for apiserver process to appear ...
	I0911 10:58:28.932678 2222784 api_server.go:88] waiting for apiserver healthz status ...
	I0911 10:58:28.932697 2222784 api_server.go:253] Checking apiserver healthz at https://192.168.39.217:8443/healthz ...
	I0911 10:58:28.938067 2222784 api_server.go:279] https://192.168.39.217:8443/healthz returned 200:
	ok
	I0911 10:58:28.939499 2222784 api_server.go:141] control plane version: v1.28.1
	I0911 10:58:28.939526 2222784 api_server.go:131] duration metric: took 6.840014ms to wait for apiserver health ...
	I0911 10:58:28.939536 2222784 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 10:58:28.976323 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:29.002935 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:29.004714 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:29.105285 2222784 system_pods.go:59] 17 kube-system pods found
	I0911 10:58:29.105320 2222784 system_pods.go:61] "coredns-5dd5756b68-2cg8c" [a229e351-155b-4d57-9746-e272bb98598b] Running
	I0911 10:58:29.105329 2222784 system_pods.go:61] "csi-hostpath-attacher-0" [245e9000-d196-429f-bf8a-ecced1fb4a71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0911 10:58:29.105338 2222784 system_pods.go:61] "csi-hostpath-resizer-0" [62c4130b-1a92-424a-a665-557da4d3f75b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0911 10:58:29.105346 2222784 system_pods.go:61] "csi-hostpathplugin-nwdhc" [239b8e34-6457-4c49-8ad7-1947faae7550] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0911 10:58:29.105351 2222784 system_pods.go:61] "etcd-addons-554886" [de10e470-2588-4a7f-8e8e-8c84386ee6c5] Running
	I0911 10:58:29.105356 2222784 system_pods.go:61] "kube-apiserver-addons-554886" [c8aff2d0-df06-48cd-a21b-e1b060e3be2d] Running
	I0911 10:58:29.105360 2222784 system_pods.go:61] "kube-controller-manager-addons-554886" [1480e1eb-ad72-4c18-a9a8-a2528659fbf1] Running
	I0911 10:58:29.105367 2222784 system_pods.go:61] "kube-ingress-dns-minikube" [3715ae8a-f6d7-4bfc-b92c-a3586056893e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0911 10:58:29.105371 2222784 system_pods.go:61] "kube-proxy-96wzg" [0655ce43-1406-45df-96a8-df0f9f378891] Running
	I0911 10:58:29.105375 2222784 system_pods.go:61] "kube-scheduler-addons-554886" [b2c82861-60fb-45da-8a82-d487c1c1301c] Running
	I0911 10:58:29.105381 2222784 system_pods.go:61] "metrics-server-7c66d45ddc-7krqz" [68915a10-f10d-4296-8a14-8c21f7f71a42] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 10:58:29.105389 2222784 system_pods.go:61] "registry-proxy-lmsgk" [c3d6d669-7454-4529-b9ac-06abb4face91] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0911 10:58:29.105398 2222784 system_pods.go:61] "registry-t6754" [8531b6ac-003f-4a6d-aab4-67819497ab11] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0911 10:58:29.105406 2222784 system_pods.go:61] "snapshot-controller-58dbcc7b99-2nql9" [e0c9b597-80fb-4724-8eef-0e970bed2638] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.105428 2222784 system_pods.go:61] "snapshot-controller-58dbcc7b99-9f7nb" [0aed7656-2dfe-4ac7-ad14-ab43a08a531f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.105433 2222784 system_pods.go:61] "storage-provisioner" [a512a348-5ded-427c-886d-f1ea3077d8ad] Running
	I0911 10:58:29.105439 2222784 system_pods.go:61] "tiller-deploy-7b677967b9-dtz9n" [871f81ec-dd78-4aa4-89e9-5b99419aa8d5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0911 10:58:29.105449 2222784 system_pods.go:74] duration metric: took 165.906891ms to wait for pod list to return data ...
	I0911 10:58:29.105460 2222784 default_sa.go:34] waiting for default service account to be created ...
	I0911 10:58:29.290200 2222784 default_sa.go:45] found service account: "default"
	I0911 10:58:29.290228 2222784 default_sa.go:55] duration metric: took 184.762583ms for default service account to be created ...
	I0911 10:58:29.290238 2222784 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 10:58:29.293354 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:29.476612 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:29.501275 2222784 system_pods.go:86] 17 kube-system pods found
	I0911 10:58:29.501305 2222784 system_pods.go:89] "coredns-5dd5756b68-2cg8c" [a229e351-155b-4d57-9746-e272bb98598b] Running
	I0911 10:58:29.501314 2222784 system_pods.go:89] "csi-hostpath-attacher-0" [245e9000-d196-429f-bf8a-ecced1fb4a71] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0911 10:58:29.501322 2222784 system_pods.go:89] "csi-hostpath-resizer-0" [62c4130b-1a92-424a-a665-557da4d3f75b] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0911 10:58:29.501330 2222784 system_pods.go:89] "csi-hostpathplugin-nwdhc" [239b8e34-6457-4c49-8ad7-1947faae7550] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0911 10:58:29.501337 2222784 system_pods.go:89] "etcd-addons-554886" [de10e470-2588-4a7f-8e8e-8c84386ee6c5] Running
	I0911 10:58:29.501342 2222784 system_pods.go:89] "kube-apiserver-addons-554886" [c8aff2d0-df06-48cd-a21b-e1b060e3be2d] Running
	I0911 10:58:29.501347 2222784 system_pods.go:89] "kube-controller-manager-addons-554886" [1480e1eb-ad72-4c18-a9a8-a2528659fbf1] Running
	I0911 10:58:29.501355 2222784 system_pods.go:89] "kube-ingress-dns-minikube" [3715ae8a-f6d7-4bfc-b92c-a3586056893e] Pending / Ready:ContainersNotReady (containers with unready status: [minikube-ingress-dns]) / ContainersReady:ContainersNotReady (containers with unready status: [minikube-ingress-dns])
	I0911 10:58:29.501359 2222784 system_pods.go:89] "kube-proxy-96wzg" [0655ce43-1406-45df-96a8-df0f9f378891] Running
	I0911 10:58:29.501367 2222784 system_pods.go:89] "kube-scheduler-addons-554886" [b2c82861-60fb-45da-8a82-d487c1c1301c] Running
	I0911 10:58:29.501374 2222784 system_pods.go:89] "metrics-server-7c66d45ddc-7krqz" [68915a10-f10d-4296-8a14-8c21f7f71a42] Running / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 10:58:29.501383 2222784 system_pods.go:89] "registry-proxy-lmsgk" [c3d6d669-7454-4529-b9ac-06abb4face91] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0911 10:58:29.501392 2222784 system_pods.go:89] "registry-t6754" [8531b6ac-003f-4a6d-aab4-67819497ab11] Pending / Ready:ContainersNotReady (containers with unready status: [registry]) / ContainersReady:ContainersNotReady (containers with unready status: [registry])
	I0911 10:58:29.501406 2222784 system_pods.go:89] "snapshot-controller-58dbcc7b99-2nql9" [e0c9b597-80fb-4724-8eef-0e970bed2638] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.501420 2222784 system_pods.go:89] "snapshot-controller-58dbcc7b99-9f7nb" [0aed7656-2dfe-4ac7-ad14-ab43a08a531f] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0911 10:58:29.501430 2222784 system_pods.go:89] "storage-provisioner" [a512a348-5ded-427c-886d-f1ea3077d8ad] Running
	I0911 10:58:29.501439 2222784 system_pods.go:89] "tiller-deploy-7b677967b9-dtz9n" [871f81ec-dd78-4aa4-89e9-5b99419aa8d5] Pending / Ready:ContainersNotReady (containers with unready status: [tiller]) / ContainersReady:ContainersNotReady (containers with unready status: [tiller])
	I0911 10:58:29.501448 2222784 system_pods.go:126] duration metric: took 211.204671ms to wait for k8s-apps to be running ...
	I0911 10:58:29.501456 2222784 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 10:58:29.501503 2222784 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 10:58:29.502304 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:29.504743 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:29.541148 2222784 system_svc.go:56] duration metric: took 39.676758ms WaitForService to wait for kubelet.
	I0911 10:58:29.541181 2222784 kubeadm.go:581] duration metric: took 42.289590939s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 10:58:29.541203 2222784 node_conditions.go:102] verifying NodePressure condition ...
	I0911 10:58:29.693461 2222784 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 10:58:29.693524 2222784 node_conditions.go:123] node cpu capacity is 2
	I0911 10:58:29.693537 2222784 node_conditions.go:105] duration metric: took 152.329102ms to run NodePressure ...
	I0911 10:58:29.693549 2222784 start.go:228] waiting for startup goroutines ...
	I0911 10:58:29.793593 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:29.978080 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:30.002479 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:30.004780 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:30.294161 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:30.477979 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:30.504947 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:30.506070 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:30.793269 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:30.983549 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:31.004392 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:31.006805 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:31.297236 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:31.481205 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:31.502879 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:31.506105 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:31.792896 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:31.980842 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:32.005072 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:32.005651 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:32.292417 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:32.478115 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:32.503903 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:32.504714 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:32.793172 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:32.983182 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:33.003821 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:33.006789 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:33.293041 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:33.476429 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:33.512778 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:33.527716 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:33.792961 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:33.977328 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:34.012931 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:34.013021 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:34.293032 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:34.486944 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:34.505218 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:34.506172 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:34.794211 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:35.004376 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:35.008955 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:35.009333 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:35.300562 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:35.476670 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:35.517252 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:35.521368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:35.816780 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:35.982509 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:36.010647 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:36.010947 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:36.291941 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:36.480064 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:36.502805 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:36.503067 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:36.796300 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:36.978917 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:37.013632 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:37.014536 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:37.292617 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:37.478443 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:37.502918 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:37.504278 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:37.793530 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:37.977376 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:38.002021 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:38.002930 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:38.292483 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:38.480264 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:38.503175 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:38.504657 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:38.797288 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:38.976221 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:39.002538 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:39.002650 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:39.292426 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:39.476947 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:39.501995 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:39.504840 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:39.796146 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:39.976606 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:40.005642 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:40.006151 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:40.293525 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:40.476611 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:40.502590 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:40.503038 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:40.798077 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:40.978246 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:41.005255 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:41.005850 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:41.293029 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:41.479555 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:41.502758 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:41.503899 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:41.793294 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:41.977754 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:42.002177 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:42.003760 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:42.297137 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:42.478287 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:42.501747 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:42.502433 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:42.794090 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:42.977688 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:43.002051 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:43.003460 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:43.825217 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:43.828590 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:43.828941 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:43.829809 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:43.833317 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:43.977042 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:44.003823 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:44.004884 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:44.293394 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:44.486131 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:44.505203 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:44.507349 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:44.792672 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:44.979384 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:45.009570 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:45.009920 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:45.292771 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:45.482344 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:45.507706 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:45.510005 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:45.793567 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:46.128762 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:46.129250 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:46.129869 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:46.293243 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:46.477426 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:46.509691 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:46.512579 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:46.792585 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:46.977471 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:47.004827 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:47.008931 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:47.293670 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:47.476567 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:47.533365 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:47.564234 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:47.792362 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:47.976764 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:48.002012 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:48.003955 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:48.292368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:48.476545 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:48.502173 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:48.502355 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:48.792930 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:48.983377 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:49.006469 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:49.007210 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:49.292495 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:49.476506 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:49.501753 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:49.503496 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:49.794704 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:49.983728 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:50.009481 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:50.011115 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:50.292701 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:50.477042 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:50.506272 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:50.506302 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:50.793004 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:50.979514 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:51.003402 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:51.003473 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:51.292987 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:51.481355 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:51.501918 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:51.503713 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:51.801531 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:51.977745 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:52.008823 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:52.011368 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:52.292244 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:52.476625 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:52.505407 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0911 10:58:52.506921 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:52.792204 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:52.976068 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:53.021505 2222784 kapi.go:107] duration metric: took 56.569029474s to wait for kubernetes.io/minikube-addons=registry ...
	I0911 10:58:53.028311 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:53.445075 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:53.477083 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:53.505638 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:53.792522 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:53.976089 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:54.004626 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:54.294638 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:54.476071 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:54.507622 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:54.793064 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:54.978086 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:55.043957 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:55.325601 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:55.477648 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:55.503077 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:55.792874 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:55.978193 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:56.002323 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:56.294506 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:56.476289 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:56.501898 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:56.792960 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:56.981556 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:57.001984 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:57.294188 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:57.477721 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:57.501617 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:57.792783 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:57.980420 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:58.002730 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:58.293201 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:58.482942 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:58.502805 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:59.026180 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:59.026521 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:59.027719 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:59.293070 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:59.477702 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:58:59.502056 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:58:59.792476 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:58:59.976725 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:00.005856 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:00.293244 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:00.478143 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:00.502850 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:00.792957 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:00.984934 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:01.019036 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:01.292140 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:01.493103 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:01.507925 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:02.074548 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:02.075290 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:02.080308 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:02.294010 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:02.477965 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:02.505401 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:02.793203 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:02.980572 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:03.005146 2222784 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0911 10:59:03.293313 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:03.476707 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:03.502038 2222784 kapi.go:107] duration metric: took 1m7.046645516s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0911 10:59:03.792441 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:03.978212 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:04.292436 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:04.476254 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:04.793962 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:04.976687 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:05.294289 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0911 10:59:05.530106 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:05.792985 2222784 kapi.go:107] duration metric: took 1m6.069455368s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0911 10:59:05.795155 2222784 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-554886 cluster.
	I0911 10:59:05.796953 2222784 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0911 10:59:05.798773 2222784 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
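The three gcp-auth notes above describe the addon's behavior once its webhook pod is Ready: credentials are injected into newly created pods, a pod can opt out via the `gcp-auth-skip-secret` label, and pods that already exist must be recreated or the addon re-enabled with --refresh. A minimal sketch of the opt-out, run from the host against this profile — only the label key comes from the log above; the pod name, image, and the "true" label value are illustrative assumptions:

    kubectl --context addons-554886 apply -f - <<'EOF'
    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds               # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"   # label key taken from the gcp-auth message above
    spec:
      containers:
      - name: app
        image: nginx:alpine
    EOF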
	I0911 10:59:05.978955 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:06.478853 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:06.977520 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:07.478180 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:07.977906 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:08.476336 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:08.978846 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:09.688109 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:09.977490 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:10.476562 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:10.976600 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:11.477286 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:11.976566 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:12.477230 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:12.977224 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:13.480541 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:13.980462 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:14.477396 2222784 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0911 10:59:14.977193 2222784 kapi.go:107] duration metric: took 1m17.590068667s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0911 10:59:14.979290 2222784 out.go:177] * Enabled addons: default-storageclass, cloud-spanner, inspektor-gadget, ingress-dns, storage-provisioner, helm-tiller, metrics-server, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I0911 10:59:14.980804 2222784 addons.go:502] enable addons completed in 1m27.930558594s: enabled=[default-storageclass cloud-spanner inspektor-gadget ingress-dns storage-provisioner helm-tiller metrics-server volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
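After addons.go:502 reports completion, the enabled set can be cross-checked from the host. A quick sketch, assuming the profile name used in this run:

    # List addon status for the profile used in this test
    minikube -p addons-554886 addons list
    # Re-enable a single addon so existing pods pick it up, as the gcp-auth note above suggests
    minikube -p addons-554886 addons enable gcp-auth --refresh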
	I0911 10:59:14.980863 2222784 start.go:233] waiting for cluster config update ...
	I0911 10:59:14.980893 2222784 start.go:242] writing updated cluster config ...
	I0911 10:59:14.981239 2222784 ssh_runner.go:195] Run: rm -f paused
	I0911 10:59:15.039922 2222784 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 10:59:15.042190 2222784 out.go:177] * Done! kubectl is now configured to use "addons-554886" cluster and "default" namespace by default
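The start log ends with kubectl pointed at the new profile. Verifying that from the host would look roughly like the following sketch; the expected context name is assumed to match the profile, not taken from this report:

    kubectl config current-context                        # should print "addons-554886"
    kubectl --context addons-554886 get pods -n default   # the namespace configured as default above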
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 10:57:03 UTC, ends at Mon 2023-09-11 10:59:37 UTC. --
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.008209321Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&PodSandboxMetadata{Name:nginx,Uid:334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694429972061318122,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T10:59:31.717916057Z,kubernetes.io/config.source: api,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=0617c415-74ef-477e-ae10-b968bc24f332 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.009054973Z" level=debug msg="Request: &PodSandboxStatusRequest{PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=3675a3e7-fb46-4e9d-bb1e-6d13624bf432 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.009161579Z" level=debug msg="Response: &PodSandboxStatusResponse{Status:&PodSandboxStatus{Id:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&PodSandboxMetadata{Name:nginx,Uid:334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694429972061318122,Network:&PodSandboxNetworkStatus{Ip:10.244.0.23,AdditionalIps:[]*PodIP{},},Linux:&LinuxPodSandboxStatus{Namespaces:&Namespace{Options:&NamespaceOption{Network:POD,Pid:CONTAINER,Ipc:POD,TargetId:,},},},Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,run: nginx,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T10:59:31.717916057Z,kubernetes.io/config.source: api,},RuntimeHandler:,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=3675a3e7-fb46-4e9d-
bb1e-6d13624bf432 name=/runtime.v1.RuntimeService/PodSandboxStatus
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.016823688Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},},}" file="go-grpc-middleware/chain.go:25" id=31379ab4-f86a-4b84-bf41-ef612d56c585 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.016917381Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=31379ab4-f86a-4b84-bf41-ef612d56c585 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.017003500Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=31379ab4-f86a-4b84-bf41-ef612d56c585 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.018237767Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,Verbose:false,}" file="go-grpc-middleware/chain.go:25" id=aec05b38-a11e-4126-966b-404dc87c914d name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.018376990Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694429976040921040,StartedAt:1694429976153467929,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/library/nginx:alpine,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/google-app-creds.json,HostPath:/var/lib/minikube/google_application_credentials.json,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/containers/nginx/0c64203f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/volumes/kubernetes.io~projected/kube-api-access-lq6h5,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/default_nginx_334139c1-49e6-47ff-b89b-d4
b0bbe9e4dc/nginx/0.log,},Info:map[string]string{},}" file="go-grpc-middleware/chain.go:25" id=aec05b38-a11e-4126-966b-404dc87c914d name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.035606884Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0211b482-ea22-4c55-a4fc-1c699f2df31a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.035907798Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0211b482-ea22-4c55-a4fc-1c699f2df31a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.036527876Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bf
d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0702cfb6d65fef0f14b301cca9feff05daf6a9e0813b52eadc1b1b8ef409937,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1694429953734357460,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 39573b80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3838108e529c32521906878e48041f18e14f538792f6b564bea29c8f5f1d4504,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1694429951772193294,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.na
me: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: c9fea948,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1c70c518e405719a19b0b94f698cab9ec447b66872c2bef906ec0d936a7b96,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1694429949757885511,Labels:map[string]string{io.kubernetes.contai
ner.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 50ca06f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4abc5228c1761699df567b9c63bddf4739c54ec77a28110ce1953c9a52de8f7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1694429948008650900,Labe
ls:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: adcbb14b,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73047ee4ad1faeb5fffd2eb4b5392bcc950e26278c291c3b0c4675f5260ed352,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1694429946357311282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 2940d570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Anno
tations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7,PodSandboxId:ac87beda7aea4be34d7e225a8a164553fbe19349a1b7dab40006b78b527068ef,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s
.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,State:CONTAINER_RUNNING,CreatedAt:1694429943044433372,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-798b8b85d7-g974z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02937856-34c0-4a63-9601-d8747d12123f,},Annotations:map[string]string{io.kubernetes.container.hash: f31c5e30,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6eb3a3928b0a44bc004813efa2bddc5ba5c2e723b31b349bbf8e5760bb790338,PodSandboxId:9573562fe835d32beb2e20f41dbb0234d569868d8d89ea4f4092dd0b19ae6eb8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429934088001852,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2nql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c9b597-80fb-4724-8eef-0e970bed2638,},Annotations:map[string]string{io.kubern
etes.container.hash: b6d69ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd2d2364045caee5d720a3ba63a93d34884861a6d0758d0e94623f4a82c4d27,PodSandboxId:0cc80513b6b8a6a0c43f5c98ef1ae438edf1fbee05c28b3427251aab5ffa2721,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429926398061168,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9f7nb,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 0aed7656-2dfe-4ac7-ad14-ab43a08a531f,},Annotations:map[string]string{io.kubernetes.container.hash: c1c56a31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14737f31f63267e0d53d26547f5b4ca8b6c59f7b548264da8785feea503ae56,PodSandboxId:49e27a6ca857a560be0a77f571a69019bdae8512f1484b539a79fc58d4f07bbd,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1694429920377264212,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 245e9000-d196-429f-bf8a-ecced1fb4a71,},Annotations:map[string]string{io.kubernetes.container.hash: bcbdd8c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afea6bbda7fee9ef197e161d177f305b883956aa2c6c5f200d0ef8e47e5c91d7,PodSandboxId:34a511639f217bbb6b0ae452bc1d1b32786c80b4e0182532887c23fa3c7f775b,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1694429918617144226,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: c
si-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c4130b-1a92-424a-a665-557da4d3f75b,},Annotations:map[string]string{io.kubernetes.container.hash: 7edc6ba2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087,PodSandboxId:5ea995cf922569c1e9bd262049382d508dcddd7ac0315397a1f5d5a3e440c00a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1694429914842645175,Labels:map[string]string{io.kubernetes.container
.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3715ae8a-f6d7-4bfc-b92c-a3586056893e,},Annotations:map[string]string{io.kubernetes.container.hash: 1a3ee56e,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f634fd9cd05f0fa392285b1139d3ae4e81a90eadbf9025c7de0c8b4bd926b7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-st
orage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1694429907056965831,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd584ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc410122
7dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,At
tempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},
Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd
173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d84656
9c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a
78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df
51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f
4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0211b482-ea22-4c55-a4fc-1c699f2df31a name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.049053209Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,Verbose:true,}" file="go-grpc-middleware/chain.go:25" id=c3763f11-ba75-430a-9669-ab5d0cb40822 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.050663409Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},State:CONTAINER_RUNNING,CreatedAt:1694429976040921040,StartedAt:1694429976153467929,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:docker.io/library/nginx:alpine,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/google-app-creds.json,HostPath:/var/lib/minikube/google_application_credentials.json,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/containers/nginx/0c64203f,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},&Mount{ContainerPath:/var/run/secrets/kubernetes.io/serviceaccount,HostPath:/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/volumes/kubernetes.io~projected/kube-api-access-lq6h5,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,},},LogPath:/var/log/pods/default_nginx_334139c1-49e6-47ff-b89b-d4
b0bbe9e4dc/nginx/0.log,},Info:map[string]string{info: {\"sandboxID\":\"1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5\",\"pid\":7130,\"runtimeSpec\":{\"ociVersion\":\"1.0.2-dev\",\"process\":{\"user\":{\"uid\":0,\"gid\":0,\"additionalGids\":[0,1,2,3,4,6,10,11,20,26,27]},\"args\":[\"/docker-entrypoint.sh\",\"nginx\",\"-g\",\"daemon off;\"],\"env\":[\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\"TERM=xterm\",\"HOSTNAME=nginx\",\"GOOGLE_APPLICATION_CREDENTIALS=/google-app-creds.json\",\"PROJECT_ID=this_is_fake\",\"GCP_PROJECT=this_is_fake\",\"GCLOUD_PROJECT=this_is_fake\",\"GOOGLE_CLOUD_PROJECT=this_is_fake\",\"CLOUDSDK_CORE_PROJECT=this_is_fake\",\"NGINX_PORT_80_TCP_ADDR=10.103.142.192\",\"KUBERNETES_SERVICE_PORT=443\",\"KUBERNETES_PORT_443_TCP=tcp://10.96.0.1:443\",\"KUBERNETES_PORT_443_TCP_PORT=443\",\"NGINX_SERVICE_HOST=10.103.142.192\",\"KUBERNETES_SERVICE_HOST=10.96.0.1\",\"NGINX_PORT=tcp://10.103.142.192:80\",\"NGINX_PORT_80_TCP_PORT=80\",\"KUBERNETES_SERVICE_POR
T_HTTPS=443\",\"KUBERNETES_PORT_443_TCP_ADDR=10.96.0.1\",\"NGINX_SERVICE_PORT=80\",\"NGINX_PORT_80_TCP_PROTO=tcp\",\"KUBERNETES_PORT=tcp://10.96.0.1:443\",\"KUBERNETES_PORT_443_TCP_PROTO=tcp\",\"NGINX_PORT_80_TCP=tcp://10.103.142.192:80\",\"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin\",\"NGINX_VERSION=1.25.2\",\"PKG_RELEASE=1\",\"NJS_VERSION=0.8.0\"],\"cwd\":\"/\",\"capabilities\":{\"bounding\":[\"CAP_CHOWN\",\"CAP_DAC_OVERRIDE\",\"CAP_FSETID\",\"CAP_FOWNER\",\"CAP_SETGID\",\"CAP_SETUID\",\"CAP_SETPCAP\",\"CAP_NET_BIND_SERVICE\",\"CAP_KILL\"],\"effective\":[\"CAP_CHOWN\",\"CAP_DAC_OVERRIDE\",\"CAP_FSETID\",\"CAP_FOWNER\",\"CAP_SETGID\",\"CAP_SETUID\",\"CAP_SETPCAP\",\"CAP_NET_BIND_SERVICE\",\"CAP_KILL\"],\"permitted\":[\"CAP_CHOWN\",\"CAP_DAC_OVERRIDE\",\"CAP_FSETID\",\"CAP_FOWNER\",\"CAP_SETGID\",\"CAP_SETUID\",\"CAP_SETPCAP\",\"CAP_NET_BIND_SERVICE\",\"CAP_KILL\"]},\"oomScoreAdj\":1000},\"root\":{\"path\":\"/var/lib/containers/storage/overlay/3e36bdd43fb0be3fda3f8242f3d6f895a8672b2b90e
5477af4e0cb5bb290fda8/merged\"},\"hostname\":\"nginx\",\"mounts\":[{\"destination\":\"/proc\",\"type\":\"proc\",\"source\":\"proc\",\"options\":[\"nosuid\",\"noexec\",\"nodev\"]},{\"destination\":\"/dev\",\"type\":\"tmpfs\",\"source\":\"tmpfs\",\"options\":[\"nosuid\",\"strictatime\",\"mode=755\",\"size=65536k\"]},{\"destination\":\"/dev/pts\",\"type\":\"devpts\",\"source\":\"devpts\",\"options\":[\"nosuid\",\"noexec\",\"newinstance\",\"ptmxmode=0666\",\"mode=0620\",\"gid=5\"]},{\"destination\":\"/dev/mqueue\",\"type\":\"mqueue\",\"source\":\"mqueue\",\"options\":[\"nosuid\",\"noexec\",\"nodev\"]},{\"destination\":\"/sys\",\"type\":\"sysfs\",\"source\":\"sysfs\",\"options\":[\"nosuid\",\"noexec\",\"nodev\",\"ro\"]},{\"destination\":\"/sys/fs/cgroup\",\"type\":\"cgroup\",\"source\":\"cgroup\",\"options\":[\"nosuid\",\"noexec\",\"nodev\",\"relatime\",\"ro\"]},{\"destination\":\"/dev/shm\",\"type\":\"bind\",\"source\":\"/var/run/containers/storage/overlay-containers/1c0b637d114ac853ccaaa23270fb68345b44b71e9a3971
6ba562769e5e526ee5/userdata/shm\",\"options\":[\"rw\",\"bind\"]},{\"destination\":\"/etc/resolv.conf\",\"type\":\"bind\",\"source\":\"/var/run/containers/storage/overlay-containers/1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5/userdata/resolv.conf\",\"options\":[\"rw\",\"bind\",\"nodev\",\"nosuid\",\"noexec\"]},{\"destination\":\"/etc/hostname\",\"type\":\"bind\",\"source\":\"/var/run/containers/storage/overlay-containers/1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5/userdata/hostname\",\"options\":[\"rw\",\"bind\"]},{\"destination\":\"/run/.containerenv\",\"type\":\"bind\",\"source\":\"/var/lib/containers/storage/overlay-containers/1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5/userdata/.containerenv\",\"options\":[\"rw\",\"bind\"]},{\"destination\":\"/google-app-creds.json\",\"type\":\"bind\",\"source\":\"/var/lib/minikube/google_application_credentials.json\",\"options\":[\"ro\",\"rbind\",\"rprivate\",\"bind\"]},{\"destination\":\"/etc/hosts\",\"type\
":\"bind\",\"source\":\"/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/etc-hosts\",\"options\":[\"rw\",\"rbind\",\"rprivate\",\"bind\"]},{\"destination\":\"/dev/termination-log\",\"type\":\"bind\",\"source\":\"/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/containers/nginx/0c64203f\",\"options\":[\"rw\",\"rbind\",\"rprivate\",\"bind\"]},{\"destination\":\"/var/run/secrets/kubernetes.io/serviceaccount\",\"type\":\"bind\",\"source\":\"/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/volumes/kubernetes.io~projected/kube-api-access-lq6h5\",\"options\":[\"ro\",\"rbind\",\"rprivate\",\"bind\"]}],\"annotations\":{\"io.kubernetes.cri-o.Metadata\":\"{\\\"name\\\":\\\"nginx\\\"}\",\"io.kubernetes.cri-o.Volumes\":\"[{\\\"container_path\\\":\\\"/google-app-creds.json\\\",\\\"host_path\\\":\\\"/var/lib/minikube/google_application_credentials.json\\\",\\\"readonly\\\":true},{\\\"container_path\\\":\\\"/etc/hosts\\\",\\\"host_path\\\":\\\"/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bb
e9e4dc/etc-hosts\\\",\\\"readonly\\\":false},{\\\"container_path\\\":\\\"/dev/termination-log\\\",\\\"host_path\\\":\\\"/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/containers/nginx/0c64203f\\\",\\\"readonly\\\":false},{\\\"container_path\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"host_path\\\":\\\"/var/lib/kubelet/pods/334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/volumes/kubernetes.io~projected/kube-api-access-lq6h5\\\",\\\"readonly\\\":true}]\",\"io.kubernetes.container.terminationMessagePath\":\"/dev/termination-log\",\"io.kubernetes.pod.terminationGracePeriod\":\"30\",\"io.kubernetes.cri-o.SandboxName\":\"k8s_nginx_default_334139c1-49e6-47ff-b89b-d4b0bbe9e4dc_0\",\"io.kubernetes.cri-o.Annotations\":\"{\\\"io.kubernetes.container.hash\\\":\\\"aec7e2c9\\\",\\\"io.kubernetes.container.ports\\\":\\\"[{\\\\\\\"containerPort\\\\\\\":80,\\\\\\\"protocol\\\\\\\":\\\\\\\"TCP\\\\\\\"}]\\\",\\\"io.kubernetes.container.restartCount\\\":\\\"0\\\",\\\"io.kubernetes.container.terminationMessageP
ath\\\":\\\"/dev/termination-log\\\",\\\"io.kubernetes.container.terminationMessagePolicy\\\":\\\"File\\\",\\\"io.kubernetes.pod.terminationGracePeriod\\\":\\\"30\\\"}\",\"io.kubernetes.container.terminationMessagePolicy\":\"File\",\"io.kubernetes.container.name\":\"nginx\",\"io.kubernetes.cri-o.LogPath\":\"/var/log/pods/default_nginx_334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/nginx/0.log\",\"io.kubernetes.cri-o.TTY\":\"false\",\"io.kubernetes.cri-o.Created\":\"2023-09-11T10:59:35.743807717Z\",\"io.kubernetes.cri-o.ImageRef\":\"433dbc17191a7830a9db6454bcc23414ad36caecedab39d1e51d41083ab1d629\",\"io.kubernetes.cri-o.SandboxID\":\"1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5\",\"io.kubernetes.cri-o.ContainerType\":\"container\",\"org.opencontainers.image.stopSignal\":\"SIGQUIT\",\"io.kubernetes.cri-o.Name\":\"k8s_nginx_nginx_default_334139c1-49e6-47ff-b89b-d4b0bbe9e4dc_0\",\"io.kubernetes.cri-o.Stdin\":\"false\",\"io.kubernetes.cri-o.MountPoint\":\"/var/lib/containers/storage/overlay/3e36bdd43fb0b
e3fda3f8242f3d6f895a8672b2b90e5477af4e0cb5bb290fda8/merged\",\"io.kubernetes.container.restartCount\":\"0\",\"io.kubernetes.container.hash\":\"aec7e2c9\",\"io.kubernetes.container.ports\":\"[{\\\"containerPort\\\":80,\\\"protocol\\\":\\\"TCP\\\"}]\",\"kubernetes.io/config.source\":\"api\",\"io.kubernetes.pod.namespace\":\"default\",\"io.kubernetes.cri-o.IP.0\":\"10.244.0.23\",\"io.kubernetes.cri-o.Image\":\"docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70\",\"io.kubernetes.cri-o.ContainerID\":\"e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4\",\"io.kubernetes.cri-o.ResolvPath\":\"/var/run/containers/storage/overlay-containers/1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5/userdata/resolv.conf\",\"io.container.manager\":\"cri-o\",\"io.kubernetes.cri-o.SeccompProfilePath\":\"\",\"io.kubernetes.pod.name\":\"nginx\",\"kubernetes.io/config.seen\":\"2023-09-11T10:59:31.717916057Z\",\"io.kubernetes.cri-o.ImageName\":\"docker.io/library
/nginx:alpine\",\"io.kubernetes.cri-o.StdinOnce\":\"false\",\"io.kubernetes.cri-o.Labels\":\"{\\\"io.kubernetes.container.name\\\":\\\"nginx\\\",\\\"io.kubernetes.pod.name\\\":\\\"nginx\\\",\\\"io.kubernetes.pod.namespace\\\":\\\"default\\\",\\\"io.kubernetes.pod.uid\\\":\\\"334139c1-49e6-47ff-b89b-d4b0bbe9e4dc\\\"}\",\"io.kubernetes.pod.uid\":\"334139c1-49e6-47ff-b89b-d4b0bbe9e4dc\"},\"linux\":{\"resources\":{\"devices\":[{\"allow\":false,\"access\":\"rwm\"}],\"cpu\":{\"shares\":2,\"quota\":0,\"period\":100000},\"pids\":{\"limit\":1024},\"hugepageLimits\":[{\"pageSize\":\"2MB\",\"limit\":0}]},\"cgroupsPath\":\"/kubepods/besteffort/pod334139c1-49e6-47ff-b89b-d4b0bbe9e4dc/crio-e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4\",\"namespaces\":[{\"type\":\"pid\"},{\"type\":\"network\",\"path\":\"/var/run/netns/d4551fab-f345-4da1-8414-c8a607b41fbf\"},{\"type\":\"ipc\",\"path\":\"/var/run/ipcns/d4551fab-f345-4da1-8414-c8a607b41fbf\"},{\"type\":\"uts\",\"path\":\"/var/run/utsns/d4551fab-f345-4da1-84
14-c8a607b41fbf\"},{\"type\":\"mount\"}],\"maskedPaths\":[\"/proc/asound\",\"/proc/acpi\",\"/proc/kcore\",\"/proc/keys\",\"/proc/latency_stats\",\"/proc/timer_list\",\"/proc/timer_stats\",\"/proc/sched_debug\",\"/proc/scsi\",\"/sys/firmware\"],\"readonlyPaths\":[\"/proc/bus\",\"/proc/fs\",\"/proc/irq\",\"/proc/sys\",\"/proc/sysrq-trigger\"]}},\"privileged\":false},},}" file="go-grpc-middleware/chain.go:25" id=c3763f11-ba75-430a-9669-ab5d0cb40822 name=/runtime.v1.RuntimeService/ContainerStatus
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.092832550Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1a9c6d89-2460-4150-ac27-3114a788441b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.092930507Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1a9c6d89-2460-4150-ac27-3114a788441b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.093456624Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bf
d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0702cfb6d65fef0f14b301cca9feff05daf6a9e0813b52eadc1b1b8ef409937,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1694429953734357460,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 39573b80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3838108e529c32521906878e48041f18e14f538792f6b564bea29c8f5f1d4504,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1694429951772193294,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.na
me: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: c9fea948,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1c70c518e405719a19b0b94f698cab9ec447b66872c2bef906ec0d936a7b96,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1694429949757885511,Labels:map[string]string{io.kubernetes.contai
ner.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 50ca06f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4abc5228c1761699df567b9c63bddf4739c54ec77a28110ce1953c9a52de8f7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1694429948008650900,Labe
ls:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: adcbb14b,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73047ee4ad1faeb5fffd2eb4b5392bcc950e26278c291c3b0c4675f5260ed352,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1694429946357311282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 2940d570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Anno
tations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7,PodSandboxId:ac87beda7aea4be34d7e225a8a164553fbe19349a1b7dab40006b78b527068ef,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s
.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,State:CONTAINER_RUNNING,CreatedAt:1694429943044433372,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-798b8b85d7-g974z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02937856-34c0-4a63-9601-d8747d12123f,},Annotations:map[string]string{io.kubernetes.container.hash: f31c5e30,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6eb3a3928b0a44bc004813efa2bddc5ba5c2e723b31b349bbf8e5760bb790338,PodSandboxId:9573562fe835d32beb2e20f41dbb0234d569868d8d89ea4f4092dd0b19ae6eb8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429934088001852,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2nql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c9b597-80fb-4724-8eef-0e970bed2638,},Annotations:map[string]string{io.kubern
etes.container.hash: b6d69ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd2d2364045caee5d720a3ba63a93d34884861a6d0758d0e94623f4a82c4d27,PodSandboxId:0cc80513b6b8a6a0c43f5c98ef1ae438edf1fbee05c28b3427251aab5ffa2721,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429926398061168,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9f7nb,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 0aed7656-2dfe-4ac7-ad14-ab43a08a531f,},Annotations:map[string]string{io.kubernetes.container.hash: c1c56a31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14737f31f63267e0d53d26547f5b4ca8b6c59f7b548264da8785feea503ae56,PodSandboxId:49e27a6ca857a560be0a77f571a69019bdae8512f1484b539a79fc58d4f07bbd,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1694429920377264212,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 245e9000-d196-429f-bf8a-ecced1fb4a71,},Annotations:map[string]string{io.kubernetes.container.hash: bcbdd8c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afea6bbda7fee9ef197e161d177f305b883956aa2c6c5f200d0ef8e47e5c91d7,PodSandboxId:34a511639f217bbb6b0ae452bc1d1b32786c80b4e0182532887c23fa3c7f775b,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1694429918617144226,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: c
si-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c4130b-1a92-424a-a665-557da4d3f75b,},Annotations:map[string]string{io.kubernetes.container.hash: 7edc6ba2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087,PodSandboxId:5ea995cf922569c1e9bd262049382d508dcddd7ac0315397a1f5d5a3e440c00a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1694429914842645175,Labels:map[string]string{io.kubernetes.container
.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3715ae8a-f6d7-4bfc-b92c-a3586056893e,},Annotations:map[string]string{io.kubernetes.container.hash: 1a3ee56e,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f634fd9cd05f0fa392285b1139d3ae4e81a90eadbf9025c7de0c8b4bd926b7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-st
orage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1694429907056965831,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd584ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc410122
7dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,At
tempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},
Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd
173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d84656
9c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a
78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df
51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f
4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1a9c6d89-2460-4150-ac27-3114a788441b name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.126671580Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=08eeb469-3523-4451-bb04-a7eb6e2cf6e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.126846179Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=08eeb469-3523-4451-bb04-a7eb6e2cf6e0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.127345399Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bf
d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0702cfb6d65fef0f14b301cca9feff05daf6a9e0813b52eadc1b1b8ef409937,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1694429953734357460,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 39573b80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3838108e529c32521906878e48041f18e14f538792f6b564bea29c8f5f1d4504,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1694429951772193294,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.na
me: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: c9fea948,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1c70c518e405719a19b0b94f698cab9ec447b66872c2bef906ec0d936a7b96,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1694429949757885511,Labels:map[string]string{io.kubernetes.contai
ner.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 50ca06f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4abc5228c1761699df567b9c63bddf4739c54ec77a28110ce1953c9a52de8f7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1694429948008650900,Labe
ls:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: adcbb14b,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73047ee4ad1faeb5fffd2eb4b5392bcc950e26278c291c3b0c4675f5260ed352,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1694429946357311282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 2940d570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Anno
tations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7,PodSandboxId:ac87beda7aea4be34d7e225a8a164553fbe19349a1b7dab40006b78b527068ef,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s
.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,State:CONTAINER_RUNNING,CreatedAt:1694429943044433372,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-798b8b85d7-g974z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02937856-34c0-4a63-9601-d8747d12123f,},Annotations:map[string]string{io.kubernetes.container.hash: f31c5e30,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6eb3a3928b0a44bc004813efa2bddc5ba5c2e723b31b349bbf8e5760bb790338,PodSandboxId:9573562fe835d32beb2e20f41dbb0234d569868d8d89ea4f4092dd0b19ae6eb8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429934088001852,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2nql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c9b597-80fb-4724-8eef-0e970bed2638,},Annotations:map[string]string{io.kubern
etes.container.hash: b6d69ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd2d2364045caee5d720a3ba63a93d34884861a6d0758d0e94623f4a82c4d27,PodSandboxId:0cc80513b6b8a6a0c43f5c98ef1ae438edf1fbee05c28b3427251aab5ffa2721,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429926398061168,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9f7nb,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 0aed7656-2dfe-4ac7-ad14-ab43a08a531f,},Annotations:map[string]string{io.kubernetes.container.hash: c1c56a31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14737f31f63267e0d53d26547f5b4ca8b6c59f7b548264da8785feea503ae56,PodSandboxId:49e27a6ca857a560be0a77f571a69019bdae8512f1484b539a79fc58d4f07bbd,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1694429920377264212,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 245e9000-d196-429f-bf8a-ecced1fb4a71,},Annotations:map[string]string{io.kubernetes.container.hash: bcbdd8c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afea6bbda7fee9ef197e161d177f305b883956aa2c6c5f200d0ef8e47e5c91d7,PodSandboxId:34a511639f217bbb6b0ae452bc1d1b32786c80b4e0182532887c23fa3c7f775b,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1694429918617144226,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: c
si-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c4130b-1a92-424a-a665-557da4d3f75b,},Annotations:map[string]string{io.kubernetes.container.hash: 7edc6ba2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087,PodSandboxId:5ea995cf922569c1e9bd262049382d508dcddd7ac0315397a1f5d5a3e440c00a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1694429914842645175,Labels:map[string]string{io.kubernetes.container
.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3715ae8a-f6d7-4bfc-b92c-a3586056893e,},Annotations:map[string]string{io.kubernetes.container.hash: 1a3ee56e,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f634fd9cd05f0fa392285b1139d3ae4e81a90eadbf9025c7de0c8b4bd926b7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-st
orage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1694429907056965831,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd584ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc410122
7dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,At
tempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},
Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd
173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d84656
9c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a
78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df
51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f
4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=08eeb469-3523-4451-bb04-a7eb6e2cf6e0 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.167960342Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e128aac9-b7d3-4fcf-9fe0-05487b228ab0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.168035538Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e128aac9-b7d3-4fcf-9fe0-05487b228ab0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.168590837Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bf
d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0702cfb6d65fef0f14b301cca9feff05daf6a9e0813b52eadc1b1b8ef409937,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1694429953734357460,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 39573b80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3838108e529c32521906878e48041f18e14f538792f6b564bea29c8f5f1d4504,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1694429951772193294,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.na
me: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: c9fea948,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1c70c518e405719a19b0b94f698cab9ec447b66872c2bef906ec0d936a7b96,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1694429949757885511,Labels:map[string]string{io.kubernetes.contai
ner.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 50ca06f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4abc5228c1761699df567b9c63bddf4739c54ec77a28110ce1953c9a52de8f7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1694429948008650900,Labe
ls:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: adcbb14b,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73047ee4ad1faeb5fffd2eb4b5392bcc950e26278c291c3b0c4675f5260ed352,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1694429946357311282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 2940d570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Anno
tations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7,PodSandboxId:ac87beda7aea4be34d7e225a8a164553fbe19349a1b7dab40006b78b527068ef,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s
.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,State:CONTAINER_RUNNING,CreatedAt:1694429943044433372,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-798b8b85d7-g974z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02937856-34c0-4a63-9601-d8747d12123f,},Annotations:map[string]string{io.kubernetes.container.hash: f31c5e30,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6eb3a3928b0a44bc004813efa2bddc5ba5c2e723b31b349bbf8e5760bb790338,PodSandboxId:9573562fe835d32beb2e20f41dbb0234d569868d8d89ea4f4092dd0b19ae6eb8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429934088001852,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2nql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c9b597-80fb-4724-8eef-0e970bed2638,},Annotations:map[string]string{io.kubern
etes.container.hash: b6d69ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd2d2364045caee5d720a3ba63a93d34884861a6d0758d0e94623f4a82c4d27,PodSandboxId:0cc80513b6b8a6a0c43f5c98ef1ae438edf1fbee05c28b3427251aab5ffa2721,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429926398061168,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9f7nb,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 0aed7656-2dfe-4ac7-ad14-ab43a08a531f,},Annotations:map[string]string{io.kubernetes.container.hash: c1c56a31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14737f31f63267e0d53d26547f5b4ca8b6c59f7b548264da8785feea503ae56,PodSandboxId:49e27a6ca857a560be0a77f571a69019bdae8512f1484b539a79fc58d4f07bbd,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1694429920377264212,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 245e9000-d196-429f-bf8a-ecced1fb4a71,},Annotations:map[string]string{io.kubernetes.container.hash: bcbdd8c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afea6bbda7fee9ef197e161d177f305b883956aa2c6c5f200d0ef8e47e5c91d7,PodSandboxId:34a511639f217bbb6b0ae452bc1d1b32786c80b4e0182532887c23fa3c7f775b,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1694429918617144226,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: c
si-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c4130b-1a92-424a-a665-557da4d3f75b,},Annotations:map[string]string{io.kubernetes.container.hash: 7edc6ba2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087,PodSandboxId:5ea995cf922569c1e9bd262049382d508dcddd7ac0315397a1f5d5a3e440c00a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1694429914842645175,Labels:map[string]string{io.kubernetes.container
.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3715ae8a-f6d7-4bfc-b92c-a3586056893e,},Annotations:map[string]string{io.kubernetes.container.hash: 1a3ee56e,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f634fd9cd05f0fa392285b1139d3ae4e81a90eadbf9025c7de0c8b4bd926b7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-st
orage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1694429907056965831,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd584ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc410122
7dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,At
tempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},
Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd
173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d84656
9c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a
78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df
51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f
4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e128aac9-b7d3-4fcf-9fe0-05487b228ab0 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.203605622Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=acf8078c-7bfc-449e-8cef-4938706df707 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.203679677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=acf8078c-7bfc-449e-8cef-4938706df707 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 10:59:37 addons-554886 crio[718]: time="2023-09-11 10:59:37.204303149Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e6f1300f92f924eb38f918cbbbaba8b7742dc355e29060610d6f9f1cae5485a4,PodSandboxId:1c0b637d114ac853ccaaa23270fb68345b44b71e9a39716ba562769e5e526ee5,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694429975740900865,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 334139c1-49e6-47ff-b89b-d4b0bbe9e4dc,},Annotations:map[string]string{io.kubernetes.container.hash: aec7e2c9,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.ku
bernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a04896a1d542387cdce1f69ec00638478e2ec1626f413cdead2ea5e5cbf4bebd,PodSandboxId:b514d60c86df5442f755460c026c14c4cce2e1099cdd51c0cbf07d718fc7a56a,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,Annotations:map[string]string{},},ImageRef:ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552,State:CONTAINER_RUNNING,CreatedAt:1694429963760425761,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-699c48fb74-8w9jw,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: ad3210cf-3754-41f6-89a2-f8128558feb4,},Annotations:map[string]string{io.kubernetes.container.hash: 12f74bf
d,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c0702cfb6d65fef0f14b301cca9feff05daf6a9e0813b52eadc1b1b8ef409937,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-snapshotter,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f,State:CONTAINER_RUNNING,CreatedAt:1694429953734357460,Labels:map[string]string{io.kubernetes.container.name: csi-snapshotter,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.name
space: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 39573b80,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3838108e529c32521906878e48041f18e14f538792f6b564bea29c8f5f1d4504,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-provisioner,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7,State:CONTAINER_RUNNING,CreatedAt:1694429951772193294,Labels:map[string]string{io.kubernetes.container.name: csi-provisioner,io.kubernetes.pod.na
me: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: c9fea948,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c1c70c518e405719a19b0b94f698cab9ec447b66872c2bef906ec0d936a7b96,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:liveness-probe,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6,State:CONTAINER_RUNNING,CreatedAt:1694429949757885511,Labels:map[string]string{io.kubernetes.contai
ner.name: liveness-probe,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 50ca06f6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d4abc5228c1761699df567b9c63bddf4739c54ec77a28110ce1953c9a52de8f7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:hostpath,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11,State:CONTAINER_RUNNING,CreatedAt:1694429948008650900,Labe
ls:map[string]string{io.kubernetes.container.name: hostpath,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: adcbb14b,io.kubernetes.container.ports: [{\"name\":\"healthz\",\"containerPort\":9898,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73047ee4ad1faeb5fffd2eb4b5392bcc950e26278c291c3b0c4675f5260ed352,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:node-driver-registrar,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,Annotations:map[string]string{},},ImageRef:registry.k8s.i
o/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc,State:CONTAINER_RUNNING,CreatedAt:1694429946357311282,Labels:map[string]string{io.kubernetes.container.name: node-driver-registrar,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 2940d570,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd,PodSandboxId:9740261cfaa9f374ea7df10168bb21b6faffbb3ea136a5dba1e5e2f65faaf056,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,Anno
tations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06,State:CONTAINER_RUNNING,CreatedAt:1694429944862845586,Labels:map[string]string{io.kubernetes.container.name: gcp-auth,io.kubernetes.pod.name: gcp-auth-d4c87556c-dc54c,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: bf0580b8-8a02-4e4f-a81b-78336400127f,},Annotations:map[string]string{io.kubernetes.container.hash: a9a159e1,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9ec9fe29bd730f779b2f36cc309c70bd6a566d1a21cb7d58667f70843bf60d7,PodSandboxId:ac87beda7aea4be34d7e225a8a164553fbe19349a1b7dab40006b78b527068ef,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s
.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2,State:CONTAINER_RUNNING,CreatedAt:1694429943044433372,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-798b8b85d7-g974z,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 02937856-34c0-4a63-9601-d8747d12123f,},Annotations:map[string]string{io.kubernetes.container.hash: f31c5e30,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.containe
r.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:6eb3a3928b0a44bc004813efa2bddc5ba5c2e723b31b349bbf8e5760bb790338,PodSandboxId:9573562fe835d32beb2e20f41dbb0234d569868d8d89ea4f4092dd0b19ae6eb8,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429934088001852,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-2nql9,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e0c9b597-80fb-4724-8eef-0e970bed2638,},Annotations:map[string]string{io.kubern
etes.container.hash: b6d69ef2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bdd2d2364045caee5d720a3ba63a93d34884861a6d0758d0e94623f4a82c4d27,PodSandboxId:0cc80513b6b8a6a0c43f5c98ef1ae438edf1fbee05c28b3427251aab5ffa2721,Metadata:&ContainerMetadata{Name:volume-snapshot-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922,State:CONTAINER_RUNNING,CreatedAt:1694429926398061168,Labels:map[string]string{io.kubernetes.container.name: volume-snapshot-controller,io.kubernetes.pod.name: snapshot-controller-58dbcc7b99-9f7nb,io.kubernetes.pod.namespace: kube-system,io.kuber
netes.pod.uid: 0aed7656-2dfe-4ac7-ad14-ab43a08a531f,},Annotations:map[string]string{io.kubernetes.container.hash: c1c56a31,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f14737f31f63267e0d53d26547f5b4ca8b6c59f7b548264da8785feea503ae56,PodSandboxId:49e27a6ca857a560be0a77f571a69019bdae8512f1484b539a79fc58d4f07bbd,Metadata:&ContainerMetadata{Name:csi-attacher,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0,State:CONTAINER_RUNNING,CreatedAt:1694429920377264212,Labels:map[string]string{io.kubernetes.container.name: csi-attacher,io.kubernetes.pod.name: csi-hostpath-attacher-0,io.kubernet
es.pod.namespace: kube-system,io.kubernetes.pod.uid: 245e9000-d196-429f-bf8a-ecced1fb4a71,},Annotations:map[string]string{io.kubernetes.container.hash: bcbdd8c4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:afea6bbda7fee9ef197e161d177f305b883956aa2c6c5f200d0ef8e47e5c91d7,PodSandboxId:34a511639f217bbb6b0ae452bc1d1b32786c80b4e0182532887c23fa3c7f775b,Metadata:&ContainerMetadata{Name:csi-resizer,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8,State:CONTAINER_RUNNING,CreatedAt:1694429918617144226,Labels:map[string]string{io.kubernetes.container.name: csi-resizer,io.kubernetes.pod.name: c
si-hostpath-resizer-0,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 62c4130b-1a92-424a-a665-557da4d3f75b,},Annotations:map[string]string{io.kubernetes.container.hash: 7edc6ba2,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b1b9c8904b9085b9df6abd12d0add1c9aed8a78dd0f2f7083b425f26ac956087,PodSandboxId:5ea995cf922569c1e9bd262049382d508dcddd7ac0315397a1f5d5a3e440c00a,Metadata:&ContainerMetadata{Name:minikube-ingress-dns,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f,State:CONTAINER_RUNNING,CreatedAt:1694429914842645175,Labels:map[string]string{io.kubernetes.container
.name: minikube-ingress-dns,io.kubernetes.pod.name: kube-ingress-dns-minikube,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3715ae8a-f6d7-4bfc-b92c-a3586056893e,},Annotations:map[string]string{io.kubernetes.container.hash: 1a3ee56e,io.kubernetes.container.ports: [{\"hostPort\":53,\"containerPort\":53,\"protocol\":\"UDP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:61f634fd9cd05f0fa392285b1139d3ae4e81a90eadbf9025c7de0c8b4bd926b7,PodSandboxId:cbf07fb3335d9565b18d80cef897495304848aaf56a84b17312b5bcd6a0e89e5,Metadata:&ContainerMetadata{Name:csi-external-health-monitor-controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,Annotations:map[string]string{},},ImageRef:registry.k8s.io/sig-st
orage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864,State:CONTAINER_RUNNING,CreatedAt:1694429907056965831,Labels:map[string]string{io.kubernetes.container.name: csi-external-health-monitor-controller,io.kubernetes.pod.name: csi-hostpathplugin-nwdhc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 239b8e34-6457-4c49-8ad7-1947faae7550,},Annotations:map[string]string{io.kubernetes.container.hash: 8fd584ff,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9708c28af064391544beee1ee1dbe9117c10932f1cf6e661c1d52a79e6d2f7a7,PodSandboxId:d1e2db8fa59e24d19015485e5d08f389743050bec184ac543e70dd992d8d7381,Metadata:&ContainerMetadata{Name:gadget,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc410122
7dbead4d44c36b255ca7,Annotations:map[string]string{},},ImageRef:ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7,State:CONTAINER_RUNNING,CreatedAt:1694429904842048305,Labels:map[string]string{io.kubernetes.container.name: gadget,io.kubernetes.pod.name: gadget-9pxcq,io.kubernetes.pod.namespace: gadget,io.kubernetes.pod.uid: 40439794-e1e9-4402-af31-3eab9b7d98f8,},Annotations:map[string]string{io.kubernetes.container.hash: 6293a273,io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/cleanup.sh\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: FallbackToLogsOnError,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:04265140b079bb7ccc93d9ef455b0429a9a0988fa3e126371fa4dbf813de3ed1,PodSandboxId:0fee506f67358ea3c9f7a4055f6e750a7289f0c8a40d187a3a560df3db3442c8,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429892128354257,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-95cdm,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 8af7ef82-090b-42cf-922e-dfbcdf88d182,},Annotations:map[string]string{io.kubernetes.container.hash: 4caca714,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8666074a25ec46f50b22651fb3f29183d8e3371f0b631263c82ad60c32d414d8,PodSandboxId:d41f1c31726c7b95653708c122d4d4fd4e3c61d08a62a9e0cdbfce6bf80319bb,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image
:&ImageSpec{Image:7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:04b38ca48bcadd0c3644dc7f2ae14358ae41b628f9d1bdbf80f35ff880d9462d,State:CONTAINER_EXITED,CreatedAt:1694429891602095812,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-89wvw,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 914fe891-c74b-4373-b3c8-c01d60957ad2,},Annotations:map[string]string{io.kubernetes.container.hash: 6e1e178a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765,PodSandboxId:80d7317a52cb1af55ac159e6e71a55a887b12ca3707c74105be69bbc176ef640,Metadata:&ContainerMetadata{Name:storage-provisioner,At
tempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694429886633587284,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a512a348-5ded-427c-886d-f1ea3077d8ad,},Annotations:map[string]string{io.kubernetes.container.hash: 770d99a6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a,PodSandboxId:1be58301fe4047ff60f415331ad946bc1243cbc9c3f98882c37401afd056a8cd,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},
Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694429881246439208,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-96wzg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0655ce43-1406-45df-96a8-df0f9f378891,},Annotations:map[string]string{io.kubernetes.container.hash: a50addd7,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8,PodSandboxId:f5e1ab94d36a18af3604ef275add0776e9e456ccc0818d519c24e0637bd0f69e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd
173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694429872875040374,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-2cg8c,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a229e351-155b-4d57-9746-e272bb98598b,},Annotations:map[string]string{io.kubernetes.container.hash: ee3d8c60,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b85b6c4269e32b857d95aee174cbf7d84656
9c0e16351c40e186352008b53f35,PodSandboxId:d2777b0cf12d1948a71d06ac815685e6130ccf2eaa07ee95ffd5df801245aae7,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694429847493099029,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dbe2baf9218ed67291266f724f53af8b,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a
78d89d4cd5d45,PodSandboxId:9597f607cedfee331548ff144a07176ee1000ec43c1d635ac238d6e9dbe4ab0a,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694429847340083504,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: df2e1e6cbbba5f991a1800bb8a6c2332,},Annotations:map[string]string{io.kubernetes.container.hash: 65c298ed,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067,PodSandboxId:882038185b326b8b9adcab1424df
51bf4ce45b853af6af9d7787ff440ab75855,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694429846882677662,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b5b1405ab7c345dd99916b2c2c2183af,},Annotations:map[string]string{io.kubernetes.container.hash: 178d1b9a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3,PodSandboxId:e968bf9990e35b43100d047068ef5261d6d5536630f
4ef88a3b46aa18c98c46c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694429846860033142,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-554886,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b0cb9be75befef2045b9b4e1cf010be,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=acf8078c-7bfc-449e-8cef-4938706df707 name=/runtime.v1alpha2.RuntimeS
ervice/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                                                        CREATED              STATE               NAME                                     ATTEMPT             POD ID
	e6f1300f92f92       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                                              1 second ago         Running             nginx                                    0                   1c0b637d114ac
	a04896a1d5423       ghcr.io/headlamp-k8s/headlamp@sha256:1909603c0614e14bda48b6b59d8166e796b652ca2ac196db5940063edbc21552                                        13 seconds ago       Running             headlamp                                 0                   b514d60c86df5
	c0702cfb6d65f       registry.k8s.io/sig-storage/csi-snapshotter@sha256:291334908ddf71a4661fd7f6d9d97274de8a5378a2b6fdfeb2ce73414a34f82f                          23 seconds ago       Running             csi-snapshotter                          0                   cbf07fb3335d9
	3838108e529c3       registry.k8s.io/sig-storage/csi-provisioner@sha256:1bc653d13b27b8eefbba0799bdb5711819f8b987eaa6eb6750e8ef001958d5a7                          25 seconds ago       Running             csi-provisioner                          0                   cbf07fb3335d9
	3c1c70c518e40       registry.k8s.io/sig-storage/livenessprobe@sha256:42bc492c3c65078b1ccda5dbc416abf0cefdba3e6317416cbc43344cf0ed09b6                            27 seconds ago       Running             liveness-probe                           0                   cbf07fb3335d9
	d4abc5228c176       registry.k8s.io/sig-storage/hostpathplugin@sha256:6fdad87766e53edf987545067e69a0dffb8485cccc546be4efbaa14c9b22ea11                           29 seconds ago       Running             hostpath                                 0                   cbf07fb3335d9
	73047ee4ad1fa       registry.k8s.io/sig-storage/csi-node-driver-registrar@sha256:7caa903cf3f8d1d70c3b7bb3e23223685b05e4f342665877eabe84ae38b92ecc                30 seconds ago       Running             node-driver-registrar                    0                   cbf07fb3335d9
	95953c2448d53       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:3e92b3d1c15220ae0f2f3505fb3a88899a1e48ec85fb777a1a4945ae9db2ce06                                 32 seconds ago       Running             gcp-auth                                 0                   9740261cfaa9f
	c9ec9fe29bd73       registry.k8s.io/ingress-nginx/controller@sha256:74834d3d25b336b62cabeb8bf7f1d788706e2cf1cfd64022de4137ade8881ff2                             34 seconds ago       Running             controller                               0                   ac87beda7aea4
	6eb3a3928b0a4       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      43 seconds ago       Running             volume-snapshot-controller               0                   9573562fe835d
	bdd2d2364045c       registry.k8s.io/sig-storage/snapshot-controller@sha256:4ef48aa1f079b2b6f11d06ee8be30a7f7332fc5ff1e4b20c6b6af68d76925922                      50 seconds ago       Running             volume-snapshot-controller               0                   0cc80513b6b8a
	f14737f31f632       registry.k8s.io/sig-storage/csi-attacher@sha256:66e4ecfa0ec50a88f9cd145e006805816f57040f40662d4cb9e31d10519d9bf0                             56 seconds ago       Running             csi-attacher                             0                   49e27a6ca857a
	afea6bbda7fee       registry.k8s.io/sig-storage/csi-resizer@sha256:0629447f7946e53df3ad775c5595888de1dae5a23bcaae8f68fdab0395af61a8                              58 seconds ago       Running             csi-resizer                              0                   34a511639f217
	b1b9c8904b908       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4abe27f9fc03fedab1d655e2020e6b165faf3bf6de1088ce6cf215a75b78f05f                             About a minute ago   Running             minikube-ingress-dns                     0                   5ea995cf92256
	61f634fd9cd05       registry.k8s.io/sig-storage/csi-external-health-monitor-controller@sha256:317f43813e4e2c3e81823ff16041c8e0714fb80e6d040c6e6c799967ba27d864   About a minute ago   Running             csi-external-health-monitor-controller   0                   cbf07fb3335d9
	9708c28af0643       ghcr.io/inspektor-gadget/inspektor-gadget@sha256:01b7311f9512411ef6530e09dbdd3aeaea0abc4101227dbead4d44c36b255ca7                            About a minute ago   Running             gadget                                   0                   d1e2db8fa59e2
	04265140b079b       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                                             About a minute ago   Exited              patch                                    0                   0fee506f67358
	8666074a25ec4       7e7451bb70423d31bdadcf0a71a3107b64858eccd7827d066234650b5e7b36b0                                                                             About a minute ago   Exited              create                                   0                   d41f1c31726c7
	199992096f96d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                                             About a minute ago   Running             storage-provisioner                      0                   80d7317a52cb1
	951a4e6a74345       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                                                             About a minute ago   Running             kube-proxy                               0                   1be58301fe404
	c6f785152f05d       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                                             About a minute ago   Running             coredns                                  0                   f5e1ab94d36a1
	b85b6c4269e32       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                                                             2 minutes ago        Running             kube-scheduler                           0                   d2777b0cf12d1
	c5bfc35139dce       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                                             2 minutes ago        Running             etcd                                     0                   9597f607cedfe
	77cce1e0548e5       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                                                             2 minutes ago        Running             kube-apiserver                           0                   882038185b326
	81d35d166d610       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                                                             2 minutes ago        Running             kube-controller-manager                  0                   e968bf9990e35
	
	* 
	* ==> coredns [c6f785152f05d07dbae2fedc136b845c462b1d07a7842e60f56ae9b89386f6c8] <==
	* [INFO] 10.244.0.8:45290 - 41373 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000213146s
	[INFO] 10.244.0.8:44476 - 34673 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000129839s
	[INFO] 10.244.0.8:44476 - 24687 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000146277s
	[INFO] 10.244.0.8:43353 - 23776 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000225767s
	[INFO] 10.244.0.8:43353 - 14562 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000100048s
	[INFO] 10.244.0.8:38377 - 22679 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.00013966s
	[INFO] 10.244.0.8:38377 - 6549 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000270582s
	[INFO] 10.244.0.8:38525 - 6914 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00009834s
	[INFO] 10.244.0.8:38525 - 14588 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000102921s
	[INFO] 10.244.0.8:37652 - 28811 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000076626s
	[INFO] 10.244.0.8:37652 - 49550 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00035517s
	[INFO] 10.244.0.8:47464 - 31265 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000068238s
	[INFO] 10.244.0.8:47464 - 37666 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000089247s
	[INFO] 10.244.0.8:46778 - 43950 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000075552s
	[INFO] 10.244.0.8:46778 - 52652 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000085759s
	[INFO] 10.244.0.19:59535 - 64363 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000336353s
	[INFO] 10.244.0.19:40079 - 11444 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000218574s
	[INFO] 10.244.0.19:47190 - 17013 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000317521s
	[INFO] 10.244.0.19:41944 - 41465 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000165872s
	[INFO] 10.244.0.19:33585 - 31457 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000129588s
	[INFO] 10.244.0.19:60703 - 57198 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000115098s
	[INFO] 10.244.0.19:54182 - 29383 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 344 0.00056064s
	[INFO] 10.244.0.19:43629 - 22789 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.001234053s
	[INFO] 10.244.0.21:42936 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.001327073s
	[INFO] 10.244.0.21:45843 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000128616s
	
	* 
	* ==> describe nodes <==
	* Name:               addons-554886
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-554886
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=addons-554886
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T10_57_35_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-554886
	Annotations:        csi.volume.kubernetes.io/nodeid: {"hostpath.csi.k8s.io":"addons-554886"}
	                    kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 10:57:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-554886
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 10:59:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 10:59:36 +0000   Mon, 11 Sep 2023 10:57:28 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 10:59:36 +0000   Mon, 11 Sep 2023 10:57:28 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 10:59:36 +0000   Mon, 11 Sep 2023 10:57:28 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 10:59:36 +0000   Mon, 11 Sep 2023 10:57:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.217
	  Hostname:    addons-554886
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             3914504Ki
	  pods:               110
	System Info:
	  Machine ID:                 698c538a33834a8798df0fdc57bfacd9
	  System UUID:                698c538a-3383-4a87-98df-0fdc57bfacd9
	  Boot ID:                    057dd6fa-4fdc-43cb-a756-ab25caae2723
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6s
	  gadget                      gadget-9pxcq                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  gcp-auth                    gcp-auth-d4c87556c-dc54c                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         98s
	  headlamp                    headlamp-699c48fb74-8w9jw                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         21s
	  ingress-nginx               ingress-nginx-controller-798b8b85d7-g974z    100m (5%)     0 (0%)      90Mi (2%)        0 (0%)         101s
	  kube-system                 coredns-5dd5756b68-2cg8c                     100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     110s
	  kube-system                 csi-hostpath-attacher-0                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 csi-hostpath-resizer-0                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 csi-hostpathplugin-nwdhc                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 etcd-addons-554886                           100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         2m2s
	  kube-system                 kube-apiserver-addons-554886                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m3s
	  kube-system                 kube-controller-manager-addons-554886        200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         104s
	  kube-system                 kube-proxy-96wzg                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         111s
	  kube-system                 kube-scheduler-addons-554886                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m2s
	  kube-system                 snapshot-controller-58dbcc7b99-2nql9         0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 snapshot-controller-58dbcc7b99-9f7nb         0 (0%)        0 (0%)      0 (0%)           0 (0%)         102s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         103s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  0 (0%)
	  memory             260Mi (6%)  170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 91s                    kube-proxy       
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m12s (x8 over 2m12s)  kubelet          Node addons-554886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s (x8 over 2m12s)  kubelet          Node addons-554886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m12s (x7 over 2m12s)  kubelet          Node addons-554886 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m3s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m3s                   kubelet          Node addons-554886 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m3s                   kubelet          Node addons-554886 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m3s                   kubelet          Node addons-554886 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m3s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                2m2s                   kubelet          Node addons-554886 status is now: NodeReady
	  Normal  RegisteredNode           111s                   node-controller  Node addons-554886 event: Registered Node addons-554886 in Controller
	
	* 
	* ==> dmesg <==
	* [  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.135166] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.743937] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[Sep11 10:57] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.141524] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.073117] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.156724] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.109352] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.154889] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.112905] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.223172] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[ +10.482421] systemd-fstab-generator[913]: Ignoring "noauto" for root device
	[  +9.303062] systemd-fstab-generator[1246]: Ignoring "noauto" for root device
	[ +25.570784] kauditd_printk_skb: 54 callbacks suppressed
	[Sep11 10:58] kauditd_printk_skb: 2 callbacks suppressed
	[  +7.439133] kauditd_printk_skb: 16 callbacks suppressed
	[  +7.832510] kauditd_printk_skb: 12 callbacks suppressed
	[ +16.732330] kauditd_printk_skb: 16 callbacks suppressed
	[Sep11 10:59] kauditd_printk_skb: 3 callbacks suppressed
	[  +5.243922] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.901711] kauditd_printk_skb: 25 callbacks suppressed
	
	* 
	* ==> etcd [c5bfc35139dceabe199eaf0da765238646c1ce3b8a0680c9b9a78d89d4cd5d45] <==
	* {"level":"warn","ts":"2023-09-11T10:59:02.06519Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"438.968361ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T10:59:02.065264Z","caller":"traceutil/trace.go:171","msg":"trace[77052488] range","detail":"{range_begin:/registry/cronjobs/; range_end:/registry/cronjobs0; response_count:0; response_revision:1047; }","duration":"439.052659ms","start":"2023-09-11T10:59:01.6262Z","end":"2023-09-11T10:59:02.065253Z","steps":["trace[77052488] 'agreement among raft nodes before linearized reading'  (duration: 438.937144ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:02.065294Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:01.626191Z","time spent":"439.095087ms","remote":"127.0.0.1:36132","response type":"/etcdserverpb.KV/Range","request count":0,"request size":44,"response count":0,"response size":29,"request content":"key:\"/registry/cronjobs/\" range_end:\"/registry/cronjobs0\" count_only:true "}
	{"level":"info","ts":"2023-09-11T10:59:02.063703Z","caller":"traceutil/trace.go:171","msg":"trace[1281774980] linearizableReadLoop","detail":"{readStateIndex:1077; appliedIndex:1076; }","duration":"437.468103ms","start":"2023-09-11T10:59:01.62622Z","end":"2023-09-11T10:59:02.063688Z","steps":["trace[1281774980] 'read index received'  (duration: 436.867021ms)","trace[1281774980] 'applied index is now lower than readState.Index'  (duration: 600.067µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T10:59:02.067286Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"278.366362ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:1 size:4149"}
	{"level":"info","ts":"2023-09-11T10:59:02.067497Z","caller":"traceutil/trace.go:171","msg":"trace[1029130702] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:1; response_revision:1047; }","duration":"278.659878ms","start":"2023-09-11T10:59:01.788827Z","end":"2023-09-11T10:59:02.067487Z","steps":["trace[1029130702] 'agreement among raft nodes before linearized reading'  (duration: 278.195931ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:02.068046Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"228.677769ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/runtimeclasses/\" range_end:\"/registry/runtimeclasses0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T10:59:02.068102Z","caller":"traceutil/trace.go:171","msg":"trace[1101319233] range","detail":"{range_begin:/registry/runtimeclasses/; range_end:/registry/runtimeclasses0; response_count:0; response_revision:1047; }","duration":"228.738572ms","start":"2023-09-11T10:59:01.839355Z","end":"2023-09-11T10:59:02.068094Z","steps":["trace[1101319233] 'agreement among raft nodes before linearized reading'  (duration: 228.59701ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:09.672269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"198.412209ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:78420"}
	{"level":"info","ts":"2023-09-11T10:59:09.672336Z","caller":"traceutil/trace.go:171","msg":"trace[1446574203] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1098; }","duration":"198.496608ms","start":"2023-09-11T10:59:09.47383Z","end":"2023-09-11T10:59:09.672327Z","steps":["trace[1446574203] 'range keys from in-memory index tree'  (duration: 198.193599ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443127Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"349.069634ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T10:59:20.443216Z","caller":"traceutil/trace.go:171","msg":"trace[1132995439] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1155; }","duration":"349.173434ms","start":"2023-09-11T10:59:20.094026Z","end":"2023-09-11T10:59:20.443199Z","steps":["trace[1132995439] 'range keys from in-memory index tree'  (duration: 348.995194ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443254Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.094012Z","time spent":"349.233897ms","remote":"127.0.0.1:36072","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2023-09-11T10:59:20.443397Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"345.063128ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" ","response":"range_response_count:1 size:3110"}
	{"level":"info","ts":"2023-09-11T10:59:20.443445Z","caller":"traceutil/trace.go:171","msg":"trace[452450509] range","detail":"{range_begin:/registry/pods/default/; range_end:/registry/pods/default0; response_count:1; response_revision:1155; }","duration":"345.110343ms","start":"2023-09-11T10:59:20.098328Z","end":"2023-09-11T10:59:20.443438Z","steps":["trace[452450509] 'range keys from in-memory index tree'  (duration: 344.977431ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443474Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.098314Z","time spent":"345.15307ms","remote":"127.0.0.1:36118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":50,"response count":1,"response size":3134,"request content":"key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" "}
	{"level":"warn","ts":"2023-09-11T10:59:20.443699Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.308938ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:79184"}
	{"level":"info","ts":"2023-09-11T10:59:20.44381Z","caller":"traceutil/trace.go:171","msg":"trace[727986588] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1155; }","duration":"342.420611ms","start":"2023-09-11T10:59:20.101382Z","end":"2023-09-11T10:59:20.443803Z","steps":["trace[727986588] 'range keys from in-memory index tree'  (duration: 342.195865ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.443833Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.10138Z","time spent":"342.446645ms","remote":"127.0.0.1:36118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":17,"response size":79208,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"warn","ts":"2023-09-11T10:59:20.444097Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"342.761335ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:79184"}
	{"level":"info","ts":"2023-09-11T10:59:20.44415Z","caller":"traceutil/trace.go:171","msg":"trace[1359748218] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1155; }","duration":"342.817214ms","start":"2023-09-11T10:59:20.101327Z","end":"2023-09-11T10:59:20.444144Z","steps":["trace[1359748218] 'range keys from in-memory index tree'  (duration: 342.549712ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T10:59:20.44417Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T10:59:20.101315Z","time spent":"342.84914ms","remote":"127.0.0.1:36118","response type":"/etcdserverpb.KV/Range","request count":0,"request size":58,"response count":17,"response size":79208,"request content":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" "}
	{"level":"info","ts":"2023-09-11T10:59:23.550927Z","caller":"traceutil/trace.go:171","msg":"trace[26628167] linearizableReadLoop","detail":"{readStateIndex:1236; appliedIndex:1235; }","duration":"152.398431ms","start":"2023-09-11T10:59:23.398511Z","end":"2023-09-11T10:59:23.55091Z","steps":["trace[26628167] 'read index received'  (duration: 152.285318ms)","trace[26628167] 'applied index is now lower than readState.Index'  (duration: 112.186µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T10:59:23.551383Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"152.869344ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/\" range_end:\"/registry/pods/kube-system0\" ","response":"range_response_count:17 size:79196"}
	{"level":"info","ts":"2023-09-11T10:59:23.551444Z","caller":"traceutil/trace.go:171","msg":"trace[754521536] range","detail":"{range_begin:/registry/pods/kube-system/; range_end:/registry/pods/kube-system0; response_count:17; response_revision:1200; }","duration":"152.950074ms","start":"2023-09-11T10:59:23.398485Z","end":"2023-09-11T10:59:23.551435Z","steps":["trace[754521536] 'agreement among raft nodes before linearized reading'  (duration: 152.728063ms)"],"step_count":1}
	
	* 
	* ==> gcp-auth [95953c2448d53e0a4fd7253b696f484a42cfc4abe5be9e218bb9b39fc52ec6bd] <==
	* 2023/09/11 10:59:05 GCP Auth Webhook started!
	2023/09/11 10:59:16 Ready to marshal response ...
	2023/09/11 10:59:16 Ready to write response ...
	2023/09/11 10:59:16 Ready to marshal response ...
	2023/09/11 10:59:16 Ready to write response ...
	2023/09/11 10:59:16 Ready to marshal response ...
	2023/09/11 10:59:16 Ready to write response ...
	2023/09/11 10:59:25 Ready to marshal response ...
	2023/09/11 10:59:25 Ready to write response ...
	2023/09/11 10:59:27 Ready to marshal response ...
	2023/09/11 10:59:27 Ready to write response ...
	2023/09/11 10:59:31 Ready to marshal response ...
	2023/09/11 10:59:31 Ready to write response ...
	
	* 
	* ==> kernel <==
	*  10:59:37 up 2 min,  0 users,  load average: 3.86, 2.15, 0.84
	Linux addons-554886 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [77cce1e0548e55b02e701ad2962c2a83de923fe37e4b7c196ecea2d85c81c067] <==
	* E0911 10:58:37.107867       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.186.210:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.186.210:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.186.210:443: connect: connection refused
	E0911 10:58:37.111585       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.186.210:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.186.210:443/apis/metrics.k8s.io/v1beta1": dial tcp 10.111.186.210:443: connect: connection refused
	W0911 10:58:38.107471       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 10:58:38.107530       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 10:58:38.107539       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 10:58:38.107826       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 10:58:38.108103       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 10:58:38.108711       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 10:58:42.142974       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 10:58:42.143056       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	E0911 10:58:42.143778       1 available_controller.go:460] v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.111.186.210:443/apis/metrics.k8s.io/v1beta1: Get "https://10.111.186.210:443/apis/metrics.k8s.io/v1beta1": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
	I0911 10:58:42.229338       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0911 10:58:42.292843       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0911 10:58:43.824840       1 trace.go:236] Trace[315506063]: "List" accept:application/json, */*,audit-id:ab90563b-b09a-430f-bced-21b8d18a644b,client:192.168.39.1,protocol:HTTP/2.0,resource:pods,scope:namespace,url:/api/v1/namespaces/gcp-auth/pods,user-agent:minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format,verb:LIST (11-Sep-2023 10:58:43.286) (total time: 537ms):
	Trace[315506063]: ["List(recursive=true) etcd3" audit-id:ab90563b-b09a-430f-bced-21b8d18a644b,key:/pods/gcp-auth,resourceVersion:,resourceVersionMatch:,limit:0,continue: 537ms (10:58:43.287)]
	Trace[315506063]: [537.766ms] [537.766ms] END
	I0911 10:58:43.828968       1 trace.go:236] Trace[2028122866]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.39.217,type:*v1.Endpoints,resource:apiServerIPInfo (11-Sep-2023 10:58:43.319) (total time: 509ms):
	Trace[2028122866]: ---"initial value restored" 502ms (10:58:43.822)
	Trace[2028122866]: [509.316807ms] [509.316807ms] END
	I0911 10:59:16.575379       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.103.132.58"}
	E0911 10:59:21.665003       1 controller.go:159] removing "v1beta1.metrics.k8s.io" from AggregationController failed with: resource not found
	I0911 10:59:31.440343       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0911 10:59:31.763712       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.103.142.192"}
	
	* 
	* ==> kube-controller-manager [81d35d166d61082a4a9320eec524a7a7a1a217f44672b189a303bd3247ea9ba3] <==
	* I0911 10:59:03.415948       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="70.831µs"
	I0911 10:59:03.440539       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="244.204µs"
	I0911 10:59:05.525547       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="64.643824ms"
	I0911 10:59:05.525659       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="gcp-auth/gcp-auth-d4c87556c" duration="49.451µs"
	I0911 10:59:16.624114       1 event.go:307] "Event occurred" object="headlamp/headlamp" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set headlamp-699c48fb74 to 1"
	I0911 10:59:16.633974       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"headlamp-699c48fb74-\" is forbidden: error looking up service account headlamp/headlamp: serviceaccount \"headlamp\" not found"
	I0911 10:59:16.653089       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="32.987744ms"
	E0911 10:59:16.653242       1 replica_set.go:557] sync "headlamp/headlamp-699c48fb74" failed with pods "headlamp-699c48fb74-" is forbidden: error looking up service account headlamp/headlamp: serviceaccount "headlamp" not found
	I0911 10:59:16.750590       1 event.go:307] "Event occurred" object="headlamp/headlamp-699c48fb74" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: headlamp-699c48fb74-8w9jw"
	I0911 10:59:16.774268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="120.928385ms"
	I0911 10:59:16.806899       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="32.561322ms"
	I0911 10:59:16.821318       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="14.342886ms"
	I0911 10:59:16.821461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="87.52µs"
	I0911 10:59:18.433180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="17.363781ms"
	I0911 10:59:18.433368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/snapshot-controller-58dbcc7b99" duration="87.432µs"
	I0911 10:59:21.490383       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/cloud-spanner-emulator-6dcc56475c" duration="5.043µs"
	I0911 10:59:21.762082       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-7c66d45ddc" duration="4.871µs"
	I0911 10:59:22.294165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="93.539499ms"
	I0911 10:59:22.294440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-798b8b85d7" duration="136.454µs"
	I0911 10:59:22.678313       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0911 10:59:24.565835       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="33.901836ms"
	I0911 10:59:24.566276       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="headlamp/headlamp-699c48fb74" duration="204.233µs"
	I0911 10:59:30.686476       1 replica_set.go:676] "Finished syncing" kind="ReplicationController" key="kube-system/registry" duration="10.84µs"
	I0911 10:59:31.205640       1 event.go:307] "Event occurred" object="default/hpvc" fieldPath="" kind="PersistentVolumeClaim" apiVersion="v1" type="Normal" reason="ExternalProvisioning" message="Waiting for a volume to be created either by the external provisioner 'hostpath.csi.k8s.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered."
	I0911 10:59:33.381846       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/tiller-deploy-7b677967b9" duration="5.039µs"
	
	* 
	* ==> kube-proxy [951a4e6a74345b0620c9fbdeb2ec317ddde1fa2604ef1b976b02a1af4549527a] <==
	* I0911 10:58:05.121629       1 server_others.go:69] "Using iptables proxy"
	I0911 10:58:05.481267       1 node.go:141] Successfully retrieved node IP: 192.168.39.217
	I0911 10:58:06.272399       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 10:58:06.272674       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 10:58:06.279032       1 server_others.go:152] "Using iptables Proxier"
	I0911 10:58:06.279207       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 10:58:06.279428       1 server.go:846] "Version info" version="v1.28.1"
	I0911 10:58:06.279593       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 10:58:06.280424       1 config.go:188] "Starting service config controller"
	I0911 10:58:06.280492       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 10:58:06.280524       1 config.go:97] "Starting endpoint slice config controller"
	I0911 10:58:06.280540       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 10:58:06.281158       1 config.go:315] "Starting node config controller"
	I0911 10:58:06.281201       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 10:58:06.380642       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 10:58:06.380838       1 shared_informer.go:318] Caches are synced for service config
	I0911 10:58:06.392180       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [b85b6c4269e32b857d95aee174cbf7d846569c0e16351c40e186352008b53f35] <==
	* W0911 10:57:31.411785       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 10:57:31.411796       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 10:57:31.414031       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 10:57:31.414093       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 10:57:31.414311       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 10:57:31.414352       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 10:57:32.257890       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0911 10:57:32.257947       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0911 10:57:32.316611       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 10:57:32.316665       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 10:57:32.505575       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 10:57:32.505630       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 10:57:32.611911       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 10:57:32.612000       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 10:57:32.613376       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 10:57:32.613440       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0911 10:57:32.643133       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 10:57:32.643184       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 10:57:32.653880       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 10:57:32.653989       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 10:57:32.743850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 10:57:32.743941       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 10:57:32.912934       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 10:57:32.912988       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0911 10:57:34.908484       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 10:57:03 UTC, ends at Mon 2023-09-11 10:59:37 UTC. --
	Sep 11 10:59:32 addons-554886 kubelet[1253]: I0911 10:59:32.812589    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h5bgg\" (UniqueName: \"kubernetes.io/projected/12983c0e-3c81-458d-b449-8130ae0b468b-kube-api-access-h5bgg\") pod \"12983c0e-3c81-458d-b449-8130ae0b468b\" (UID: \"12983c0e-3c81-458d-b449-8130ae0b468b\") "
	Sep 11 10:59:32 addons-554886 kubelet[1253]: I0911 10:59:32.818984    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/12983c0e-3c81-458d-b449-8130ae0b468b-kube-api-access-h5bgg" (OuterVolumeSpecName: "kube-api-access-h5bgg") pod "12983c0e-3c81-458d-b449-8130ae0b468b" (UID: "12983c0e-3c81-458d-b449-8130ae0b468b"). InnerVolumeSpecName "kube-api-access-h5bgg". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 10:59:32 addons-554886 kubelet[1253]: I0911 10:59:32.873257    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="12983c0e-3c81-458d-b449-8130ae0b468b" path="/var/lib/kubelet/pods/12983c0e-3c81-458d-b449-8130ae0b468b/volumes"
	Sep 11 10:59:32 addons-554886 kubelet[1253]: I0911 10:59:32.873630    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="8531b6ac-003f-4a6d-aab4-67819497ab11" path="/var/lib/kubelet/pods/8531b6ac-003f-4a6d-aab4-67819497ab11/volumes"
	Sep 11 10:59:32 addons-554886 kubelet[1253]: I0911 10:59:32.875003    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="c3d6d669-7454-4529-b9ac-06abb4face91" path="/var/lib/kubelet/pods/c3d6d669-7454-4529-b9ac-06abb4face91/volumes"
	Sep 11 10:59:32 addons-554886 kubelet[1253]: I0911 10:59:32.913773    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h5bgg\" (UniqueName: \"kubernetes.io/projected/12983c0e-3c81-458d-b449-8130ae0b468b-kube-api-access-h5bgg\") on node \"addons-554886\" DevicePath \"\""
	Sep 11 10:59:33 addons-554886 kubelet[1253]: I0911 10:59:33.837926    1253 scope.go:117] "RemoveContainer" containerID="bf06e103ac7cea5cc86f70fcb6b4a7d8dc3896af7d27b42988ed3283ea2b50ca"
	Sep 11 10:59:34 addons-554886 kubelet[1253]: I0911 10:59:34.628451    1253 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rl7s\" (UniqueName: \"kubernetes.io/projected/871f81ec-dd78-4aa4-89e9-5b99419aa8d5-kube-api-access-7rl7s\") pod \"871f81ec-dd78-4aa4-89e9-5b99419aa8d5\" (UID: \"871f81ec-dd78-4aa4-89e9-5b99419aa8d5\") "
	Sep 11 10:59:34 addons-554886 kubelet[1253]: I0911 10:59:34.636348    1253 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/871f81ec-dd78-4aa4-89e9-5b99419aa8d5-kube-api-access-7rl7s" (OuterVolumeSpecName: "kube-api-access-7rl7s") pod "871f81ec-dd78-4aa4-89e9-5b99419aa8d5" (UID: "871f81ec-dd78-4aa4-89e9-5b99419aa8d5"). InnerVolumeSpecName "kube-api-access-7rl7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 10:59:34 addons-554886 kubelet[1253]: I0911 10:59:34.729461    1253 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-7rl7s\" (UniqueName: \"kubernetes.io/projected/871f81ec-dd78-4aa4-89e9-5b99419aa8d5-kube-api-access-7rl7s\") on node \"addons-554886\" DevicePath \"\""
	Sep 11 10:59:34 addons-554886 kubelet[1253]: I0911 10:59:34.900996    1253 scope.go:117] "RemoveContainer" containerID="f05ccdb605fbf414b45218fe648b2dc148f8ec8b6a1106211db32d9fc691af46"
	Sep 11 10:59:34 addons-554886 kubelet[1253]: I0911 10:59:34.965898    1253 scope.go:117] "RemoveContainer" containerID="dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2"
	Sep 11 10:59:34 addons-554886 kubelet[1253]: E0911 10:59:34.966906    1253 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 10:59:34 addons-554886 kubelet[1253]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 10:59:34 addons-554886 kubelet[1253]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 10:59:34 addons-554886 kubelet[1253]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 10:59:35 addons-554886 kubelet[1253]: I0911 10:59:35.614131    1253 scope.go:117] "RemoveContainer" containerID="ff2646165ad1919da64b1ec3aaa124a69b3e4e4780378e42f606110bafbd814c"
	Sep 11 10:59:35 addons-554886 kubelet[1253]: I0911 10:59:35.633039    1253 scope.go:117] "RemoveContainer" containerID="dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2"
	Sep 11 10:59:35 addons-554886 kubelet[1253]: E0911 10:59:35.634303    1253 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2\": container with ID starting with dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2 not found: ID does not exist" containerID="dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2"
	Sep 11 10:59:35 addons-554886 kubelet[1253]: I0911 10:59:35.634392    1253 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2"} err="failed to get container status \"dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2\": rpc error: code = NotFound desc = could not find container \"dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2\": container with ID starting with dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2 not found: ID does not exist"
	Sep 11 10:59:35 addons-554886 kubelet[1253]: I0911 10:59:35.673948    1253 scope.go:117] "RemoveContainer" containerID="dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2"
	Sep 11 10:59:35 addons-554886 kubelet[1253]: E0911 10:59:35.674652    1253 kuberuntime_gc.go:150] "Failed to remove container" err="failed to get container status \"dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2\": rpc error: code = NotFound desc = could not find container \"dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2\": container with ID starting with dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2 not found: ID does not exist" containerID="dfa13d081268db49cd4ac0227b9b52431163bb02bfcede1e335f7144bc6808c2"
	Sep 11 10:59:35 addons-554886 kubelet[1253]: I0911 10:59:35.674681    1253 scope.go:117] "RemoveContainer" containerID="c67f96472f2c95f70eac2c2172b3fa176c281df82471ea729e89b245958b6055"
	Sep 11 10:59:36 addons-554886 kubelet[1253]: I0911 10:59:36.872299    1253 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="871f81ec-dd78-4aa4-89e9-5b99419aa8d5" path="/var/lib/kubelet/pods/871f81ec-dd78-4aa4-89e9-5b99419aa8d5/volumes"
	Sep 11 10:59:37 addons-554886 kubelet[1253]: I0911 10:59:37.046468    1253 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=3.337425441 podCreationTimestamp="2023-09-11 10:59:31 +0000 UTC" firstStartedPulling="2023-09-11 10:59:32.998422153 +0000 UTC m=+118.341356140" lastFinishedPulling="2023-09-11 10:59:35.707409299 +0000 UTC m=+121.050343290" observedRunningTime="2023-09-11 10:59:37.045942623 +0000 UTC m=+122.388876629" watchObservedRunningTime="2023-09-11 10:59:37.046412591 +0000 UTC m=+122.389346597"
	
	* 
	* ==> storage-provisioner [199992096f96d2913f8611e2c02102693a2b3ce49b8023f2566cf45732acd765] <==
	* I0911 10:58:08.458308       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 10:58:08.537261       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 10:58:08.537369       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 10:58:08.647030       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 10:58:08.663994       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-554886_4228418e-a4d3-4757-914a-1683fe81d9af!
	I0911 10:58:08.705532       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"be1f0266-cf3c-44e2-9f33-973c3042cab1", APIVersion:"v1", ResourceVersion:"834", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-554886_4228418e-a4d3-4757-914a-1683fe81d9af became leader
	I0911 10:58:09.088467       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-554886_4228418e-a4d3-4757-914a-1683fe81d9af!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-554886 -n addons-554886
helpers_test.go:261: (dbg) Run:  kubectl --context addons-554886 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-89wvw ingress-nginx-admission-patch-95cdm
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/InspektorGadget]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-554886 describe pod ingress-nginx-admission-create-89wvw ingress-nginx-admission-patch-95cdm
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-554886 describe pod ingress-nginx-admission-create-89wvw ingress-nginx-admission-patch-95cdm: exit status 1 (74.109555ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-89wvw" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-95cdm" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context addons-554886 describe pod ingress-nginx-admission-create-89wvw ingress-nginx-admission-patch-95cdm: exit status 1
--- FAIL: TestAddons/parallel/InspektorGadget (7.77s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (155.32s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:148: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-554886
addons_test.go:148: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-554886: exit status 82 (2m1.434367664s)

                                                
                                                
-- stdout --
	* Stopping node "addons-554886"  ...
	* Stopping node "addons-554886"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:150: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-554886" : exit status 82
addons_test.go:152: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-554886
addons_test.go:152: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-554886: exit status 11 (21.60159132s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:154: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-554886" : exit status 11
addons_test.go:156: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-554886
addons_test.go:156: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-554886: exit status 11 (6.14378087s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:158: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-554886" : exit status 11
addons_test.go:161: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-554886
addons_test.go:161: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-554886: exit status 11 (6.144424095s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.217:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:163: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-554886" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (155.32s)

                                                
                                    
x
+
TestIngressAddonLegacy/serial/ValidateIngressAddons (166.73s)

                                                
                                                
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-508741 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:183: (dbg) Done: kubectl --context ingress-addon-legacy-508741 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (11.348479505s)
addons_test.go:208: (dbg) Run:  kubectl --context ingress-addon-legacy-508741 replace --force -f testdata/nginx-ingress-v1beta1.yaml
addons_test.go:221: (dbg) Run:  kubectl --context ingress-addon-legacy-508741 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cc3c1e4e-9bec-426e-8070-6c2956a4e987] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cc3c1e4e-9bec-426e-8070-6c2956a4e987] Running
addons_test.go:226: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 10.311438918s
addons_test.go:238: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-508741 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
E0911 11:11:58.900099 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:13:47.570698 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:47.576021 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:47.586337 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:47.606687 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:47.647072 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:47.727454 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:47.887931 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:48.208587 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:48.849606 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:50.130250 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:13:52.692127 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
addons_test.go:238: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ingress-addon-legacy-508741 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m11.597060709s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 28

                                                
                                                
** /stderr **
addons_test.go:254: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:262: (dbg) Run:  kubectl --context ingress-addon-legacy-508741 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
addons_test.go:267: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-508741 ip
addons_test.go:273: (dbg) Run:  nslookup hello-john.test 192.168.39.127
addons_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons disable ingress-dns --alsologtostderr -v=1
E0911 11:13:57.812349 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
addons_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons disable ingress-dns --alsologtostderr -v=1: (2.747808133s)
addons_test.go:287: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons disable ingress --alsologtostderr -v=1
addons_test.go:287: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons disable ingress --alsologtostderr -v=1: (7.747794639s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ingress-addon-legacy-508741 -n ingress-addon-legacy-508741
helpers_test.go:244: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestIngressAddonLegacy/serial/ValidateIngressAddons]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-508741 logs -n 25
E0911 11:14:08.052914 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-508741 logs -n 25: (1.108746094s)
helpers_test.go:252: TestIngressAddonLegacy/serial/ValidateIngressAddons logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                   Args                                    |           Profile           |  User   | Version |     Start Time      |      End Time       |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	| image          | functional-312672 image ls                                                | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| image          | functional-312672 image save                                              | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-312672                  |                             |         |         |                     |                     |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-312672 image rm                                                | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-312672                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-312672 image ls                                                | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| image          | functional-312672 image load                                              | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| update-context | functional-312672                                                         | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-312672                                                         | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| update-context | functional-312672                                                         | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | update-context                                                            |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                    |                             |         |         |                     |                     |
	| image          | functional-312672 image ls                                                | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| image          | functional-312672 image save --daemon                                     | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | gcr.io/google-containers/addon-resizer:functional-312672                  |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-312672                                                         | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | image ls --format yaml                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-312672                                                         | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | image ls --format short                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-312672                                                         | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | image ls --format json                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| image          | functional-312672                                                         | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | image ls --format table                                                   |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	| ssh            | functional-312672 ssh pgrep                                               | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC |                     |
	|                | buildkitd                                                                 |                             |         |         |                     |                     |
	| image          | functional-312672 image build -t                                          | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	|                | localhost/my-image:functional-312672                                      |                             |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                          |                             |         |         |                     |                     |
	| image          | functional-312672 image ls                                                | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| delete         | -p functional-312672                                                      | functional-312672           | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:09 UTC |
	| start          | -p ingress-addon-legacy-508741                                            | ingress-addon-legacy-508741 | jenkins | v1.31.2 | 11 Sep 23 11:09 UTC | 11 Sep 23 11:11 UTC |
	|                | --kubernetes-version=v1.18.20                                             |                             |         |         |                     |                     |
	|                | --memory=4096 --wait=true                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr                                                         |                             |         |         |                     |                     |
	|                | -v=5 --driver=kvm2                                                        |                             |         |         |                     |                     |
	|                | --container-runtime=crio                                                  |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-508741                                               | ingress-addon-legacy-508741 | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|                | addons enable ingress                                                     |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-508741                                               | ingress-addon-legacy-508741 | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC | 11 Sep 23 11:11 UTC |
	|                | addons enable ingress-dns                                                 |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=5                                                    |                             |         |         |                     |                     |
	| ssh            | ingress-addon-legacy-508741                                               | ingress-addon-legacy-508741 | jenkins | v1.31.2 | 11 Sep 23 11:11 UTC |                     |
	|                | ssh curl -s http://127.0.0.1/                                             |                             |         |         |                     |                     |
	|                | -H 'Host: nginx.example.com'                                              |                             |         |         |                     |                     |
	| ip             | ingress-addon-legacy-508741 ip                                            | ingress-addon-legacy-508741 | jenkins | v1.31.2 | 11 Sep 23 11:13 UTC | 11 Sep 23 11:13 UTC |
	| addons         | ingress-addon-legacy-508741                                               | ingress-addon-legacy-508741 | jenkins | v1.31.2 | 11 Sep 23 11:13 UTC | 11 Sep 23 11:14 UTC |
	|                | addons disable ingress-dns                                                |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	| addons         | ingress-addon-legacy-508741                                               | ingress-addon-legacy-508741 | jenkins | v1.31.2 | 11 Sep 23 11:14 UTC | 11 Sep 23 11:14 UTC |
	|                | addons disable ingress                                                    |                             |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                    |                             |         |         |                     |                     |
	|----------------|---------------------------------------------------------------------------|-----------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:09:49
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:09:49.329356 2230538 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:09:49.329523 2230538 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:49.329536 2230538 out.go:309] Setting ErrFile to fd 2...
	I0911 11:09:49.329542 2230538 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:49.329748 2230538 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:09:49.330384 2230538 out.go:303] Setting JSON to false
	I0911 11:09:49.331377 2230538 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":233540,"bootTime":1694197049,"procs":190,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:09:49.331438 2230538 start.go:138] virtualization: kvm guest
	I0911 11:09:49.334240 2230538 out.go:177] * [ingress-addon-legacy-508741] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:09:49.336025 2230538 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:09:49.336051 2230538 notify.go:220] Checking for updates...
	I0911 11:09:49.337806 2230538 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:09:49.339543 2230538 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:09:49.341167 2230538 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:09:49.342622 2230538 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:09:49.344322 2230538 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:09:49.346129 2230538 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:09:49.383807 2230538 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 11:09:49.385670 2230538 start.go:298] selected driver: kvm2
	I0911 11:09:49.385694 2230538 start.go:902] validating driver "kvm2" against <nil>
	I0911 11:09:49.385709 2230538 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:09:49.386436 2230538 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:09:49.386528 2230538 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:09:49.402819 2230538 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:09:49.402878 2230538 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:09:49.403102 2230538 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 11:09:49.403141 2230538 cni.go:84] Creating CNI manager for ""
	I0911 11:09:49.403154 2230538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:09:49.403161 2230538 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 11:09:49.403171 2230538 start_flags.go:321] config:
	{Name:ingress-addon-legacy-508741 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-508741 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:09:49.403302 2230538 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:09:49.405157 2230538 out.go:177] * Starting control plane node ingress-addon-legacy-508741 in cluster ingress-addon-legacy-508741
	I0911 11:09:49.406532 2230538 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0911 11:09:49.431834 2230538 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0911 11:09:49.431868 2230538 cache.go:57] Caching tarball of preloaded images
	I0911 11:09:49.432076 2230538 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0911 11:09:49.433932 2230538 out.go:177] * Downloading Kubernetes v1.18.20 preload ...
	I0911 11:09:49.435533 2230538 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:09:49.469002 2230538 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4?checksum=md5:0d02e096853189c5b37812b400898e14 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4
	I0911 11:09:52.720303 2230538 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:09:52.720410 2230538 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:09:53.679071 2230538 cache.go:60] Finished verifying existence of preloaded tar for  v1.18.20 on crio
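
The preload tarball above is fetched with an md5 checksum appended to the download URL and verified before it is cached. Purely as an illustrative sketch (standard-library Go, not minikube's actual download.go/preload.go code; the function name downloadWithMD5 and the destination path are hypothetical), a checksum-verified download of that file could look like this:

    // downloadWithMD5 fetches url into dest and fails if the MD5 of the
    // downloaded bytes does not match wantMD5 (hex-encoded). Illustrative only.
    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    func downloadWithMD5(url, dest, wantMD5 string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        out, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer out.Close()

        h := md5.New()
        // Write to the file and the hasher in a single pass over the body.
        if _, err := io.Copy(io.MultiWriter(out, h), resp.Body); err != nil {
            return err
        }
        if got := hex.EncodeToString(h.Sum(nil)); got != wantMD5 {
            return fmt.Errorf("checksum mismatch: got %s, want %s", got, wantMD5)
        }
        return nil
    }

    func main() {
        // URL and checksum taken from the download.go log line above.
        url := "https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.18.20/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4"
        want := "0d02e096853189c5b37812b400898e14"
        if err := downloadWithMD5(url, "/tmp/preload.tar.lz4", want); err != nil {
            fmt.Println("download failed:", err)
        }
    }
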
	I0911 11:09:53.679462 2230538 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/config.json ...
	I0911 11:09:53.679496 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/config.json: {Name:mk3256d6bd54afcd16bc85915232650c824a2224 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:09:53.679721 2230538 start.go:365] acquiring machines lock for ingress-addon-legacy-508741: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:09:53.679774 2230538 start.go:369] acquired machines lock for "ingress-addon-legacy-508741" in 25.644µs
	I0911 11:09:53.679804 2230538 start.go:93] Provisioning new machine with config: &{Name:ingress-addon-legacy-508741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-508741 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:09:53.679890 2230538 start.go:125] createHost starting for "" (driver="kvm2")
	I0911 11:09:53.682695 2230538 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4096MB, Disk=20000MB) ...
	I0911 11:09:53.682865 2230538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:09:53.682925 2230538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:09:53.698190 2230538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38195
	I0911 11:09:53.698629 2230538 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:09:53.699232 2230538 main.go:141] libmachine: Using API Version  1
	I0911 11:09:53.699249 2230538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:09:53.699546 2230538 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:09:53.699746 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetMachineName
	I0911 11:09:53.699859 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:09:53.700024 2230538 start.go:159] libmachine.API.Create for "ingress-addon-legacy-508741" (driver="kvm2")
	I0911 11:09:53.700065 2230538 client.go:168] LocalClient.Create starting
	I0911 11:09:53.700108 2230538 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem
	I0911 11:09:53.700152 2230538 main.go:141] libmachine: Decoding PEM data...
	I0911 11:09:53.700171 2230538 main.go:141] libmachine: Parsing certificate...
	I0911 11:09:53.700229 2230538 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem
	I0911 11:09:53.700252 2230538 main.go:141] libmachine: Decoding PEM data...
	I0911 11:09:53.700263 2230538 main.go:141] libmachine: Parsing certificate...
	I0911 11:09:53.700279 2230538 main.go:141] libmachine: Running pre-create checks...
	I0911 11:09:53.700290 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .PreCreateCheck
	I0911 11:09:53.700591 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetConfigRaw
	I0911 11:09:53.700950 2230538 main.go:141] libmachine: Creating machine...
	I0911 11:09:53.700965 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .Create
	I0911 11:09:53.701097 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Creating KVM machine...
	I0911 11:09:53.702410 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found existing default KVM network
	I0911 11:09:53.703112 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:53.702981 2230573 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000029890}
	I0911 11:09:53.708745 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | trying to create private KVM network mk-ingress-addon-legacy-508741 192.168.39.0/24...
	I0911 11:09:53.789617 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Setting up store path in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741 ...
	I0911 11:09:53.789673 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Building disk image from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 11:09:53.789688 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | private KVM network mk-ingress-addon-legacy-508741 192.168.39.0/24 created
	I0911 11:09:53.789711 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:53.789555 2230573 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:09:53.789733 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Downloading /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0911 11:09:54.038674 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:54.038534 2230573 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa...
	I0911 11:09:54.217650 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:54.217490 2230573 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/ingress-addon-legacy-508741.rawdisk...
	I0911 11:09:54.217688 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Writing magic tar header
	I0911 11:09:54.217718 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Writing SSH key tar header
	I0911 11:09:54.217731 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:54.217619 2230573 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741 ...
	I0911 11:09:54.217747 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741
	I0911 11:09:54.217779 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines
	I0911 11:09:54.217795 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741 (perms=drwx------)
	I0911 11:09:54.217811 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:09:54.217827 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines (perms=drwxr-xr-x)
	I0911 11:09:54.217835 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273
	I0911 11:09:54.217843 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube (perms=drwxr-xr-x)
	I0911 11:09:54.217851 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0911 11:09:54.217859 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273 (perms=drwxrwxr-x)
	I0911 11:09:54.217883 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Checking permissions on dir: /home/jenkins
	I0911 11:09:54.217897 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Checking permissions on dir: /home
	I0911 11:09:54.217911 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Skipping /home - not owner
	I0911 11:09:54.217926 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0911 11:09:54.217939 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0911 11:09:54.217948 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Creating domain...
	I0911 11:09:54.219183 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) define libvirt domain using xml: 
	I0911 11:09:54.219213 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) <domain type='kvm'>
	I0911 11:09:54.219222 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   <name>ingress-addon-legacy-508741</name>
	I0911 11:09:54.219236 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   <memory unit='MiB'>4096</memory>
	I0911 11:09:54.219247 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   <vcpu>2</vcpu>
	I0911 11:09:54.219264 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   <features>
	I0911 11:09:54.219274 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <acpi/>
	I0911 11:09:54.219282 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <apic/>
	I0911 11:09:54.219289 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <pae/>
	I0911 11:09:54.219297 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     
	I0911 11:09:54.219336 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   </features>
	I0911 11:09:54.219367 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   <cpu mode='host-passthrough'>
	I0911 11:09:54.219394 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   
	I0911 11:09:54.219417 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   </cpu>
	I0911 11:09:54.219431 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   <os>
	I0911 11:09:54.219444 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <type>hvm</type>
	I0911 11:09:54.219453 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <boot dev='cdrom'/>
	I0911 11:09:54.219462 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <boot dev='hd'/>
	I0911 11:09:54.219475 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <bootmenu enable='no'/>
	I0911 11:09:54.219487 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   </os>
	I0911 11:09:54.219500 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   <devices>
	I0911 11:09:54.219522 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <disk type='file' device='cdrom'>
	I0911 11:09:54.219546 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/boot2docker.iso'/>
	I0911 11:09:54.219561 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <target dev='hdc' bus='scsi'/>
	I0911 11:09:54.219574 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <readonly/>
	I0911 11:09:54.219587 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     </disk>
	I0911 11:09:54.219599 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <disk type='file' device='disk'>
	I0911 11:09:54.219616 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0911 11:09:54.219642 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/ingress-addon-legacy-508741.rawdisk'/>
	I0911 11:09:54.219683 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <target dev='hda' bus='virtio'/>
	I0911 11:09:54.219712 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     </disk>
	I0911 11:09:54.219738 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <interface type='network'>
	I0911 11:09:54.219760 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <source network='mk-ingress-addon-legacy-508741'/>
	I0911 11:09:54.219775 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <model type='virtio'/>
	I0911 11:09:54.219784 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     </interface>
	I0911 11:09:54.219797 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <interface type='network'>
	I0911 11:09:54.219805 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <source network='default'/>
	I0911 11:09:54.219814 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <model type='virtio'/>
	I0911 11:09:54.219822 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     </interface>
	I0911 11:09:54.219847 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <serial type='pty'>
	I0911 11:09:54.219863 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <target port='0'/>
	I0911 11:09:54.219876 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     </serial>
	I0911 11:09:54.219890 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <console type='pty'>
	I0911 11:09:54.219903 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <target type='serial' port='0'/>
	I0911 11:09:54.219921 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     </console>
	I0911 11:09:54.219936 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     <rng model='virtio'>
	I0911 11:09:54.219952 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)       <backend model='random'>/dev/random</backend>
	I0911 11:09:54.219969 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     </rng>
	I0911 11:09:54.219978 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     
	I0911 11:09:54.219987 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)     
	I0911 11:09:54.219993 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741)   </devices>
	I0911 11:09:54.220001 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) </domain>
	I0911 11:09:54.220014 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) 
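
The XML printed above is handed to libvirt, which defines the persistent domain and then boots it. As a rough, self-contained sketch only (using the libvirt-go bindings; the import path is an assumption and this is not how minikube's kvm2 driver is actually structured), that define-then-create step could be written as:

    // Illustrative only: define a KVM domain from XML and start it.
    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed import path for the Go bindings
    )

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the config above
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Stand-in for the <domain type='kvm'>...</domain> document logged above.
        domainXML := "<domain type='kvm'>...</domain>"

        // Define the persistent domain from XML, then boot it.
        dom, err := conn.DomainDefineXML(domainXML)
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatal(err)
        }
    }
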
	I0911 11:09:54.224998 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:55:07:ba in network default
	I0911 11:09:54.225642 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Ensuring networks are active...
	I0911 11:09:54.225663 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:54.226502 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Ensuring network default is active
	I0911 11:09:54.226897 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Ensuring network mk-ingress-addon-legacy-508741 is active
	I0911 11:09:54.227378 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Getting domain xml...
	I0911 11:09:54.228160 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Creating domain...
	I0911 11:09:55.494694 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Waiting to get IP...
	I0911 11:09:55.495642 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:55.496066 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:55.496123 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:55.496062 2230573 retry.go:31] will retry after 237.539488ms: waiting for machine to come up
	I0911 11:09:55.735924 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:55.736496 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:55.736527 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:55.736443 2230573 retry.go:31] will retry after 355.025357ms: waiting for machine to come up
	I0911 11:09:56.093416 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:56.093865 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:56.093893 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:56.093823 2230573 retry.go:31] will retry after 482.644523ms: waiting for machine to come up
	I0911 11:09:56.578522 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:56.579062 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:56.579094 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:56.579015 2230573 retry.go:31] will retry after 589.186454ms: waiting for machine to come up
	I0911 11:09:57.170051 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:57.170570 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:57.170626 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:57.170503 2230573 retry.go:31] will retry after 680.695768ms: waiting for machine to come up
	I0911 11:09:57.852375 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:57.854431 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:57.854458 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:57.854354 2230573 retry.go:31] will retry after 926.244132ms: waiting for machine to come up
	I0911 11:09:58.781961 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:58.782303 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:58.782338 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:58.782250 2230573 retry.go:31] will retry after 774.936552ms: waiting for machine to come up
	I0911 11:09:59.559444 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:09:59.559912 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:09:59.559961 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:09:59.559860 2230573 retry.go:31] will retry after 1.398157038s: waiting for machine to come up
	I0911 11:10:00.959358 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:00.959909 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:10:00.959940 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:10:00.959859 2230573 retry.go:31] will retry after 1.728650601s: waiting for machine to come up
	I0911 11:10:02.690222 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:02.690626 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:10:02.690653 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:10:02.690580 2230573 retry.go:31] will retry after 2.009720576s: waiting for machine to come up
	I0911 11:10:04.702181 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:04.702640 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:10:04.702668 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:10:04.702589 2230573 retry.go:31] will retry after 1.803599544s: waiting for machine to come up
	I0911 11:10:06.507586 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:06.508082 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:10:06.508114 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:10:06.508018 2230573 retry.go:31] will retry after 2.47092164s: waiting for machine to come up
	I0911 11:10:08.980749 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:08.981308 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:10:08.981345 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:10:08.981236 2230573 retry.go:31] will retry after 4.067506308s: waiting for machine to come up
	I0911 11:10:13.053536 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:13.053921 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find current IP address of domain ingress-addon-legacy-508741 in network mk-ingress-addon-legacy-508741
	I0911 11:10:13.053952 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | I0911 11:10:13.053904 2230573 retry.go:31] will retry after 5.516918592s: waiting for machine to come up
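
The retry.go lines above show the driver repeatedly checking the DHCP leases with a growing, jittered delay until the guest obtains an address. A minimal sketch of that wait-with-backoff pattern (plain Go, not the actual retry package; waitForIP and the timing constants are hypothetical) is:

    // Illustrative only: poll for an IP with an increasing, jittered delay.
    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            // Sleep the current delay plus up to 50% jitter, then grow the delay.
            jitter := time.Duration(rand.Int63n(int64(delay / 2)))
            time.Sleep(delay + jitter)
            if delay < 5*time.Second {
                delay += delay / 2
            }
        }
        return "", errors.New("timed out waiting for machine to come up")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no lease yet")
            }
            return "192.168.39.127", nil // the address eventually found below
        }, 30*time.Second)
        fmt.Println(ip, err)
    }
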
	I0911 11:10:18.576130 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.576655 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Found IP for machine: 192.168.39.127
	I0911 11:10:18.576677 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Reserving static IP address...
	I0911 11:10:18.576703 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has current primary IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.577119 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | unable to find host DHCP lease matching {name: "ingress-addon-legacy-508741", mac: "52:54:00:24:c1:5a", ip: "192.168.39.127"} in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.666846 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Getting to WaitForSSH function...
	I0911 11:10:18.666884 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Reserved static IP address: 192.168.39.127
	I0911 11:10:18.666901 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Waiting for SSH to be available...
	I0911 11:10:18.669836 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.670224 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:minikube Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:18.670286 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.670402 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Using SSH client type: external
	I0911 11:10:18.670435 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa (-rw-------)
	I0911 11:10:18.670476 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.127 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 11:10:18.670497 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | About to run SSH command:
	I0911 11:10:18.670511 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | exit 0
	I0911 11:10:18.768932 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | SSH cmd err, output: <nil>: 
	I0911 11:10:18.769246 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) KVM machine creation complete!
	I0911 11:10:18.769605 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetConfigRaw
	I0911 11:10:18.770185 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:18.770400 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:18.770612 2230538 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 11:10:18.770631 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetState
	I0911 11:10:18.772042 2230538 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 11:10:18.772056 2230538 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 11:10:18.772063 2230538 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 11:10:18.772070 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:18.774451 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.774781 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:18.774820 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.774972 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:18.775163 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:18.775336 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:18.775550 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:18.775819 2230538 main.go:141] libmachine: Using SSH client type: native
	I0911 11:10:18.776282 2230538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0911 11:10:18.776299 2230538 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 11:10:18.904353 2230538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
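
The availability probe above is simply "exit 0" run over SSH against the new VM until it succeeds. Purely as an illustration (golang.org/x/crypto/ssh rather than the driver's own client; host, user and key path are taken from the log, the function name sshReady is hypothetical), such a probe could be written as:

    // Illustrative only: report when "exit 0" succeeds over SSH.
    package main

    import (
        "log"
        "os"
        "time"

        "golang.org/x/crypto/ssh"
    )

    func sshReady(host, user, keyPath string) error {
        pemBytes, err := os.ReadFile(keyPath)
        if err != nil {
            return err
        }
        signer, err := ssh.ParsePrivateKey(pemBytes)
        if err != nil {
            return err
        }
        cfg := &ssh.ClientConfig{
            User:            user,
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
            Timeout:         10 * time.Second,
        }
        client, err := ssh.Dial("tcp", host+":22", cfg)
        if err != nil {
            return err
        }
        defer client.Close()
        sess, err := client.NewSession()
        if err != nil {
            return err
        }
        defer sess.Close()
        return sess.Run("exit 0")
    }

    func main() {
        key := "/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa"
        if err := sshReady("192.168.39.127", "docker", key); err != nil {
            log.Fatal(err)
        }
        log.Println("SSH is available")
    }
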
	I0911 11:10:18.904379 2230538 main.go:141] libmachine: Detecting the provisioner...
	I0911 11:10:18.904388 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:18.907185 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.907504 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:18.907537 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:18.907757 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:18.908027 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:18.908226 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:18.908379 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:18.908560 2230538 main.go:141] libmachine: Using SSH client type: native
	I0911 11:10:18.908984 2230538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0911 11:10:18.908997 2230538 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 11:10:19.037560 2230538 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 11:10:19.037665 2230538 main.go:141] libmachine: found compatible host: buildroot
	I0911 11:10:19.037686 2230538 main.go:141] libmachine: Provisioning with buildroot...
	I0911 11:10:19.037701 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetMachineName
	I0911 11:10:19.038030 2230538 buildroot.go:166] provisioning hostname "ingress-addon-legacy-508741"
	I0911 11:10:19.038061 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetMachineName
	I0911 11:10:19.038285 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:19.041231 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.041604 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.041631 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.041791 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:19.041982 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.042127 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.042213 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:19.042327 2230538 main.go:141] libmachine: Using SSH client type: native
	I0911 11:10:19.042719 2230538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0911 11:10:19.042737 2230538 main.go:141] libmachine: About to run SSH command:
	sudo hostname ingress-addon-legacy-508741 && echo "ingress-addon-legacy-508741" | sudo tee /etc/hostname
	I0911 11:10:19.189268 2230538 main.go:141] libmachine: SSH cmd err, output: <nil>: ingress-addon-legacy-508741
	
	I0911 11:10:19.189300 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:19.192372 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.192709 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.192743 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.192940 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:19.193143 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.193353 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.193529 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:19.193674 2230538 main.go:141] libmachine: Using SSH client type: native
	I0911 11:10:19.194088 2230538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0911 11:10:19.194118 2230538 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\singress-addon-legacy-508741' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ingress-addon-legacy-508741/g' /etc/hosts;
				else 
					echo '127.0.1.1 ingress-addon-legacy-508741' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:10:19.333476 2230538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:10:19.333514 2230538 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:10:19.333546 2230538 buildroot.go:174] setting up certificates
	I0911 11:10:19.333563 2230538 provision.go:83] configureAuth start
	I0911 11:10:19.333578 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetMachineName
	I0911 11:10:19.333920 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetIP
	I0911 11:10:19.337086 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.337483 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.337518 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.337651 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:19.340017 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.340294 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.340328 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.340507 2230538 provision.go:138] copyHostCerts
	I0911 11:10:19.340550 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:10:19.340589 2230538 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:10:19.340598 2230538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:10:19.340666 2230538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:10:19.340744 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:10:19.340760 2230538 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:10:19.340767 2230538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:10:19.340788 2230538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:10:19.340853 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:10:19.340874 2230538 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:10:19.340881 2230538 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:10:19.340908 2230538 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:10:19.340957 2230538 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.ingress-addon-legacy-508741 san=[192.168.39.127 192.168.39.127 localhost 127.0.0.1 minikube ingress-addon-legacy-508741]
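
configureAuth above mints a server certificate whose SANs cover the VM IP, localhost and the machine name, signed by the local minikube CA. As a rough sketch only (crypto/x509 from the standard library, not minikube's provision code; the throwaway self-signed CA here stands in for the ca.pem/ca-key.pem files referenced above), a SAN-bearing server certificate could be produced like this:

    // Illustrative only: issue a server cert with the SANs seen in the log.
    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA standing in for the existing minikubeCA (assumption).
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate whose SANs mirror the san=[...] list in the log.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ingress-addon-legacy-508741"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("192.168.39.127"), net.ParseIP("127.0.0.1")},
            DNSNames:     []string{"localhost", "minikube", "ingress-addon-legacy-508741"},
        }
        srvDER, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        if err != nil {
            panic(err)
        }
        fmt.Printf("server cert issued: %d DER bytes\n", len(srvDER))
    }
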
	I0911 11:10:19.432059 2230538 provision.go:172] copyRemoteCerts
	I0911 11:10:19.432127 2230538 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:10:19.432157 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:19.435267 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.435726 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.435765 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.436011 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:19.436226 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.436411 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:19.436563 2230538 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa Username:docker}
	I0911 11:10:19.531356 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:10:19.531448 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:10:19.554947 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:10:19.555047 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1253 bytes)
	I0911 11:10:19.578655 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:10:19.578746 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:10:19.607339 2230538 provision.go:86] duration metric: configureAuth took 273.758199ms
	I0911 11:10:19.607413 2230538 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:10:19.607666 2230538 config.go:182] Loaded profile config "ingress-addon-legacy-508741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0911 11:10:19.607798 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:19.610791 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.611219 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.611264 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.611450 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:19.611687 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.611839 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.611990 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:19.612131 2230538 main.go:141] libmachine: Using SSH client type: native
	I0911 11:10:19.612543 2230538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0911 11:10:19.612559 2230538 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:10:19.953448 2230538 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:10:19.953476 2230538 main.go:141] libmachine: Checking connection to Docker...
	I0911 11:10:19.953486 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetURL
	I0911 11:10:19.955032 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Using libvirt version 6000000
	I0911 11:10:19.957752 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.958134 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.958173 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.958392 2230538 main.go:141] libmachine: Docker is up and running!
	I0911 11:10:19.958409 2230538 main.go:141] libmachine: Reticulating splines...
	I0911 11:10:19.958419 2230538 client.go:171] LocalClient.Create took 26.258342805s
	I0911 11:10:19.958450 2230538 start.go:167] duration metric: libmachine.API.Create for "ingress-addon-legacy-508741" took 26.258425261s
	I0911 11:10:19.958464 2230538 start.go:300] post-start starting for "ingress-addon-legacy-508741" (driver="kvm2")
	I0911 11:10:19.958478 2230538 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:10:19.958505 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:19.958761 2230538 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:10:19.958791 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:19.961588 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.962008 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:19.962039 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:19.962196 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:19.962427 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:19.962580 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:19.962782 2230538 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa Username:docker}
	I0911 11:10:20.059461 2230538 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:10:20.063783 2230538 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:10:20.063812 2230538 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:10:20.063888 2230538 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:10:20.063995 2230538 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:10:20.064010 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /etc/ssl/certs/22224712.pem
	I0911 11:10:20.064138 2230538 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:10:20.073581 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:10:20.096751 2230538 start.go:303] post-start completed in 138.248849ms
	I0911 11:10:20.096849 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetConfigRaw
	I0911 11:10:20.097564 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetIP
	I0911 11:10:20.100353 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.100706 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:20.100760 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.101140 2230538 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/config.json ...
	I0911 11:10:20.101400 2230538 start.go:128] duration metric: createHost completed in 26.421495451s
	I0911 11:10:20.101434 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:20.103935 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.104251 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:20.104283 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.104430 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:20.104631 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:20.104841 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:20.105002 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:20.105227 2230538 main.go:141] libmachine: Using SSH client type: native
	I0911 11:10:20.105633 2230538 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.127 22 <nil> <nil>}
	I0911 11:10:20.105645 2230538 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 11:10:20.233681 2230538 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694430620.215633339
	
	I0911 11:10:20.233709 2230538 fix.go:206] guest clock: 1694430620.215633339
	I0911 11:10:20.233717 2230538 fix.go:219] Guest: 2023-09-11 11:10:20.215633339 +0000 UTC Remote: 2023-09-11 11:10:20.101415312 +0000 UTC m=+30.808943768 (delta=114.218027ms)
	I0911 11:10:20.233737 2230538 fix.go:190] guest clock delta is within tolerance: 114.218027ms
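	The guest-clock check above compares the VM's "date +%s.%N" output against the host wall clock and only treats the clock as skewed when the delta exceeds a tolerance. Below is a minimal Go sketch of that comparison using the timestamp from this log; the two-second tolerance and the helper name are assumptions for illustration, not minikube's own values.

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// tolerance is an assumed threshold for this sketch, not minikube's setting.
	const tolerance = 2 * time.Second

	// parseGuestClock turns "seconds.nanoseconds" output from `date +%s.%N`
	// into a time.Time; float parsing is precise enough for a sketch.
	func parseGuestClock(out string) (time.Time, error) {
		secs, err := strconv.ParseFloat(out, 64)
		if err != nil {
			return time.Time{}, err
		}
		sec := int64(secs)
		nsec := int64((secs - float64(sec)) * 1e9)
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1694430620.215633339") // value from the log above
		if err != nil {
			panic(err)
		}
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		fmt.Printf("guest clock delta: %v\n", delta)
		if delta > tolerance {
			fmt.Println("delta exceeds tolerance; the clock would need correcting")
		}
	}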
	I0911 11:10:20.233743 2230538 start.go:83] releasing machines lock for "ingress-addon-legacy-508741", held for 26.553952818s
	I0911 11:10:20.233765 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:20.234144 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetIP
	I0911 11:10:20.236841 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.237168 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:20.237206 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.237347 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:20.238059 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:20.238269 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:20.238386 2230538 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:10:20.238455 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:20.238517 2230538 ssh_runner.go:195] Run: cat /version.json
	I0911 11:10:20.238546 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:20.241322 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.241573 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.241716 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:20.241751 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.241887 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:20.241980 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:20.242030 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:20.242086 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:20.242213 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:20.242295 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:20.242388 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:20.242453 2230538 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa Username:docker}
	I0911 11:10:20.242492 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:20.242592 2230538 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa Username:docker}
	I0911 11:10:20.355559 2230538 ssh_runner.go:195] Run: systemctl --version
	I0911 11:10:20.361431 2230538 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:10:20.519012 2230538 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 11:10:20.526233 2230538 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:10:20.526323 2230538 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:10:20.542389 2230538 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 11:10:20.542441 2230538 start.go:466] detecting cgroup driver to use...
	I0911 11:10:20.542522 2230538 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:10:20.556521 2230538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:10:20.570144 2230538 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:10:20.570207 2230538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:10:20.584215 2230538 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:10:20.598951 2230538 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:10:20.703375 2230538 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:10:20.827665 2230538 docker.go:212] disabling docker service ...
	I0911 11:10:20.827752 2230538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:10:20.842444 2230538 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:10:20.855519 2230538 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:10:20.964940 2230538 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:10:21.076005 2230538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:10:21.090380 2230538 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:10:21.109101 2230538 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0911 11:10:21.109169 2230538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:10:21.119903 2230538 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:10:21.119974 2230538 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:10:21.130680 2230538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:10:21.140661 2230538 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
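	The sed invocations above pin the pause image and the cgroup manager in /etc/crio/crio.conf.d/02-crio.conf. A hedged Go sketch of the same in-place rewrite follows; the regexes and file handling are illustrative of the edit, minikube itself runs sed over SSH, and writing the file requires root.

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		// Path taken from the log above; editing it requires root.
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Println("read failed:", err)
			return
		}
		// Mirror the two sed substitutions: pin the pause image and the cgroup manager.
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		if err := os.WriteFile(path, out, 0o644); err != nil {
			fmt.Println("write failed:", err)
		}
	}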
	I0911 11:10:21.151888 2230538 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:10:21.163531 2230538 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:10:21.173321 2230538 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:10:21.173401 2230538 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 11:10:21.188389 2230538 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
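	When the bridge-nf-call-iptables sysctl is missing, the flow above falls back to loading br_netfilter and then enables IPv4 forwarding. A small Go sketch of that fallback, assuming local root access via sudo (minikube runs the equivalent commands over SSH):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		// Probe the sysctl first; on a fresh guest the bridge module may not be loaded yet.
		if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
			fmt.Println("bridge-nf-call-iptables not available, loading br_netfilter:", err)
			if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
				fmt.Println("modprobe br_netfilter failed:", err)
				os.Exit(1)
			}
		}
		// Enable IPv4 forwarding, mirroring `echo 1 > /proc/sys/net/ipv4/ip_forward`.
		if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
			fmt.Println("enabling ip_forward failed:", err)
			os.Exit(1)
		}
	}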
	I0911 11:10:21.198906 2230538 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:10:21.298705 2230538 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:10:21.478348 2230538 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:10:21.478452 2230538 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:10:21.485276 2230538 start.go:534] Will wait 60s for crictl version
	I0911 11:10:21.485378 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:21.489130 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:10:21.530204 2230538 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 11:10:21.530306 2230538 ssh_runner.go:195] Run: crio --version
	I0911 11:10:21.581392 2230538 ssh_runner.go:195] Run: crio --version
	I0911 11:10:21.636735 2230538 out.go:177] * Preparing Kubernetes v1.18.20 on CRI-O 1.24.1 ...
	I0911 11:10:21.638387 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetIP
	I0911 11:10:21.641478 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:21.641847 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:21.641881 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:21.642050 2230538 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 11:10:21.646705 2230538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:10:21.659014 2230538 preload.go:132] Checking if preload exists for k8s version v1.18.20 and runtime crio
	I0911 11:10:21.659074 2230538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:10:21.692808 2230538 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
	I0911 11:10:21.692914 2230538 ssh_runner.go:195] Run: which lz4
	I0911 11:10:21.696853 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0911 11:10:21.696988 2230538 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 11:10:21.701239 2230538 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 11:10:21.701274 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.18.20-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (495439307 bytes)
	I0911 11:10:23.816470 2230538 crio.go:444] Took 2.119535 seconds to copy over tarball
	I0911 11:10:23.816546 2230538 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 11:10:27.598538 2230538 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.781962651s)
	I0911 11:10:27.598568 2230538 crio.go:451] Took 3.782068 seconds to extract the tarball
	I0911 11:10:27.598578 2230538 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 11:10:27.643890 2230538 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:10:27.697280 2230538 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.18.20". assuming images are not preloaded.
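	The "assuming images are not preloaded" decision above comes from parsing "sudo crictl images --output json" and looking for the kube-apiserver tag. A minimal Go sketch of that check follows; the struct models only the fields the check needs and is an assumption about crictl's JSON shape, not its full schema.

	package main

	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)

	// imageList models only the fields this check needs; crictl's real output is richer.
	type imageList struct {
		Images []struct {
			RepoTags []string `json:"repoTags"`
		} `json:"images"`
	}

	func main() {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err != nil {
			fmt.Println("crictl failed:", err)
			return
		}
		var list imageList
		if err := json.Unmarshal(out, &list); err != nil {
			fmt.Println("parsing crictl output failed:", err)
			return
		}
		const want = "registry.k8s.io/kube-apiserver:v1.18.20"
		for _, img := range list.Images {
			for _, tag := range img.RepoTags {
				if tag == want {
					fmt.Println("preloaded images appear to be present")
					return
				}
			}
		}
		fmt.Println("couldn't find", want, "- assuming images are not preloaded")
	}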
	I0911 11:10:27.697318 2230538 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.18.20 registry.k8s.io/kube-controller-manager:v1.18.20 registry.k8s.io/kube-scheduler:v1.18.20 registry.k8s.io/kube-proxy:v1.18.20 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.3-0 registry.k8s.io/coredns:1.6.7 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 11:10:27.697381 2230538 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:10:27.697407 2230538 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:10:27.697439 2230538 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0911 11:10:27.697478 2230538 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:10:27.697491 2230538 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:10:27.697424 2230538 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:10:27.697616 2230538 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.7
	I0911 11:10:27.697643 2230538 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:10:27.698735 2230538 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:10:27.698736 2230538 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0911 11:10:27.698774 2230538 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:10:27.698751 2230538 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:10:27.698785 2230538 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:10:27.698743 2230538 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.7: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.7
	I0911 11:10:27.698805 2230538 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.18.20: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:10:27.698736 2230538 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:10:27.854665 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:10:27.874124 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.7
	I0911 11:10:27.875911 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.3-0
	I0911 11:10:27.879076 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:10:27.881311 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:10:27.885522 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:10:27.891680 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0911 11:10:27.935148 2230538 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.18.20" needs transfer: "registry.k8s.io/kube-scheduler:v1.18.20" does not exist at hash "a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346" in container runtime
	I0911 11:10:27.935207 2230538 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:10:27.935260 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:28.002218 2230538 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:10:28.012053 2230538 cache_images.go:116] "registry.k8s.io/coredns:1.6.7" needs transfer: "registry.k8s.io/coredns:1.6.7" does not exist at hash "67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5" in container runtime
	I0911 11:10:28.012103 2230538 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.7
	I0911 11:10:28.012175 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:28.042459 2230538 cache_images.go:116] "registry.k8s.io/etcd:3.4.3-0" needs transfer: "registry.k8s.io/etcd:3.4.3-0" does not exist at hash "303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f" in container runtime
	I0911 11:10:28.042524 2230538 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:10:28.042579 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:28.067281 2230538 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.18.20" needs transfer: "registry.k8s.io/kube-apiserver:v1.18.20" does not exist at hash "7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1" in container runtime
	I0911 11:10:28.067349 2230538 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:10:28.067410 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:28.067418 2230538 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.18.20" needs transfer: "registry.k8s.io/kube-proxy:v1.18.20" does not exist at hash "27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba" in container runtime
	I0911 11:10:28.067464 2230538 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:10:28.067513 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:28.067520 2230538 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.18.20" needs transfer: "registry.k8s.io/kube-controller-manager:v1.18.20" does not exist at hash "e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290" in container runtime
	I0911 11:10:28.067549 2230538 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:10:28.067551 2230538 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0911 11:10:28.067596 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:28.067611 2230538 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0911 11:10:28.067635 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.18.20
	I0911 11:10:28.067639 2230538 ssh_runner.go:195] Run: which crictl
	I0911 11:10:28.210763 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.7
	I0911 11:10:28.210810 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.3-0
	I0911 11:10:28.210836 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.18.20
	I0911 11:10:28.210884 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0911 11:10:28.210886 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.18.20
	I0911 11:10:28.210960 2230538 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20
	I0911 11:10:28.211041 2230538 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.18.20
	I0911 11:10:28.315819 2230538 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.7
	I0911 11:10:28.315850 2230538 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0911 11:10:28.315876 2230538 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.18.20
	I0911 11:10:28.315918 2230538 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.18.20
	I0911 11:10:28.320737 2230538 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0911 11:10:28.320831 2230538 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.18.20
	I0911 11:10:28.320882 2230538 cache_images.go:92] LoadImages completed in 623.535333ms
	W0911 11:10:28.320977 2230538 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.18.20: no such file or directory
	I0911 11:10:28.321055 2230538 ssh_runner.go:195] Run: crio config
	I0911 11:10:28.385065 2230538 cni.go:84] Creating CNI manager for ""
	I0911 11:10:28.385087 2230538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:10:28.385107 2230538 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:10:28.385129 2230538 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.127 APIServerPort:8443 KubernetesVersion:v1.18.20 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ingress-addon-legacy-508741 NodeName:ingress-addon-legacy-508741 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.127"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.127 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 11:10:28.385272 2230538 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.127
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "ingress-addon-legacy-508741"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.127
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.127"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.18.20
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:10:28.385358 2230538 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.18.20/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=ingress-addon-legacy-508741 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.39.127
	
	[Install]
	 config:
	{KubernetesVersion:v1.18.20 ClusterName:ingress-addon-legacy-508741 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:10:28.385418 2230538 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.18.20
	I0911 11:10:28.394929 2230538 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:10:28.395008 2230538 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:10:28.404873 2230538 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (436 bytes)
	I0911 11:10:28.421683 2230538 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (353 bytes)
	I0911 11:10:28.438857 2230538 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2129 bytes)
	I0911 11:10:28.456123 2230538 ssh_runner.go:195] Run: grep 192.168.39.127	control-plane.minikube.internal$ /etc/hosts
	I0911 11:10:28.460362 2230538 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.127	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:10:28.473986 2230538 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741 for IP: 192.168.39.127
	I0911 11:10:28.474028 2230538 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:28.474227 2230538 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:10:28.474279 2230538 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:10:28.474352 2230538 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.key
	I0911 11:10:28.474370 2230538 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt with IP's: []
	I0911 11:10:28.768108 2230538 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt ...
	I0911 11:10:28.768144 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: {Name:mk5e56098959a98ddd846990e1e0d3d8c142b921 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:28.768342 2230538 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.key ...
	I0911 11:10:28.768353 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.key: {Name:mk047152feca4f64a1b664a05ce186b6cc5121f6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:28.768434 2230538 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.key.fc44e371
	I0911 11:10:28.768450 2230538 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.crt.fc44e371 with IP's: [192.168.39.127 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:10:28.875436 2230538 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.crt.fc44e371 ...
	I0911 11:10:28.875475 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.crt.fc44e371: {Name:mk8d4b180f1eff7f04e898d05bca49ce6c180a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:28.875649 2230538 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.key.fc44e371 ...
	I0911 11:10:28.875659 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.key.fc44e371: {Name:mk668e5d2c8e571accd8bf4ad53cb70d6652338a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:28.875729 2230538 certs.go:337] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.crt.fc44e371 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.crt
	I0911 11:10:28.875822 2230538 certs.go:341] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.key.fc44e371 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.key
	I0911 11:10:28.875882 2230538 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.key
	I0911 11:10:28.875905 2230538 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.crt with IP's: []
	I0911 11:10:29.032447 2230538 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.crt ...
	I0911 11:10:29.032487 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.crt: {Name:mk8c38df779b317925b941d8f4e499ad2732328f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:29.032688 2230538 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.key ...
	I0911 11:10:29.032701 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.key: {Name:mk3442bbc7a8e504135012fcf9ca648ea72cac33 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
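	The certificate steps above issue an apiserver certificate whose SANs include 192.168.39.127, 10.96.0.1, 127.0.0.1 and 10.0.0.1. A hedged Go sketch of issuing such a certificate follows; it self-signs for brevity, whereas minikube signs with its minikubeCA key, and the subject and validity are illustrative.

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)

	func main() {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "minikube"},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(1, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs matching the IP list in the log above.
			IPAddresses: []net.IP{
				net.ParseIP("192.168.39.127"),
				net.ParseIP("10.96.0.1"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("10.0.0.1"),
			},
		}
		// Self-signed here for brevity; minikube signs with its CA key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
			panic(err)
		}
	}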
	I0911 11:10:29.032783 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0911 11:10:29.032807 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0911 11:10:29.032840 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0911 11:10:29.032857 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0911 11:10:29.032868 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:10:29.032879 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:10:29.032891 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:10:29.032902 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:10:29.032958 2230538 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:10:29.032995 2230538 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:10:29.033006 2230538 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:10:29.033030 2230538 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:10:29.033055 2230538 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:10:29.033084 2230538 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:10:29.033130 2230538 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:10:29.033158 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:10:29.033172 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem -> /usr/share/ca-certificates/2222471.pem
	I0911 11:10:29.033181 2230538 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /usr/share/ca-certificates/22224712.pem
	I0911 11:10:29.033897 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:10:29.058856 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 11:10:29.083714 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:10:29.107067 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 11:10:29.130355 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:10:29.153735 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:10:29.177188 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:10:29.200493 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:10:29.223846 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:10:29.246499 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:10:29.269358 2230538 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:10:29.292574 2230538 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:10:29.309064 2230538 ssh_runner.go:195] Run: openssl version
	I0911 11:10:29.315253 2230538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:10:29.325776 2230538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:10:29.330875 2230538 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:10:29.332596 2230538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:10:29.338372 2230538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:10:29.348055 2230538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:10:29.357835 2230538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:10:29.362843 2230538 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:10:29.362903 2230538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:10:29.368725 2230538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:10:29.379357 2230538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:10:29.389699 2230538 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:10:29.394382 2230538 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:10:29.394515 2230538 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:10:29.399942 2230538 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
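	The "test -L ... || ln -fs ..." pattern above installs each CA under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem) so TLS libraries can locate it in /etc/ssl/certs. A small Go sketch of the same idea, shelling out to openssl for the hash; the paths are taken from the log and creating the symlink requires root.

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		// Path taken from the log above; the symlink target directory needs root.
		const cert = "/usr/share/ca-certificates/minikubeCA.pem"
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Println("openssl failed:", err)
			return
		}
		hash := strings.TrimSpace(string(out)) // e.g. b5213941, as seen in the log
		link := "/etc/ssl/certs/" + hash + ".0"
		if err := os.Symlink(cert, link); err != nil && !os.IsExist(err) {
			fmt.Println("symlink failed:", err)
			return
		}
		fmt.Println("installed", link)
	}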
	I0911 11:10:29.409580 2230538 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:10:29.413765 2230538 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:10:29.413827 2230538 kubeadm.go:404] StartCluster: {Name:ingress-addon-legacy-508741 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4096 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.18.20 ClusterName:ingress-addon-legacy-508741 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:10:29.413921 2230538 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:10:29.413964 2230538 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:10:29.443926 2230538 cri.go:89] found id: ""
	I0911 11:10:29.444027 2230538 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:10:29.452927 2230538 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:10:29.461479 2230538 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:10:29.469747 2230538 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:10:29.469810 2230538 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.18.20:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU"
	I0911 11:10:29.531635 2230538 kubeadm.go:322] W0911 11:10:29.524546     966 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
	I0911 11:10:29.667461 2230538 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:10:32.587402 2230538 kubeadm.go:322] W0911 11:10:32.581971     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0911 11:10:32.588529 2230538 kubeadm.go:322] W0911 11:10:32.583110     966 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
	I0911 11:10:43.083352 2230538 kubeadm.go:322] [init] Using Kubernetes version: v1.18.20
	I0911 11:10:43.083438 2230538 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:10:43.083547 2230538 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:10:43.083712 2230538 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:10:43.083849 2230538 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:10:43.083990 2230538 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:10:43.084094 2230538 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:10:43.084130 2230538 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:10:43.084182 2230538 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:10:43.085983 2230538 out.go:204]   - Generating certificates and keys ...
	I0911 11:10:43.086058 2230538 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:10:43.086117 2230538 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:10:43.086172 2230538 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:10:43.086223 2230538 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:10:43.086309 2230538 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:10:43.086369 2230538 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:10:43.086415 2230538 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:10:43.086546 2230538 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [ingress-addon-legacy-508741 localhost] and IPs [192.168.39.127 127.0.0.1 ::1]
	I0911 11:10:43.086591 2230538 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:10:43.086700 2230538 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [ingress-addon-legacy-508741 localhost] and IPs [192.168.39.127 127.0.0.1 ::1]
	I0911 11:10:43.086787 2230538 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:10:43.086868 2230538 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:10:43.086952 2230538 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:10:43.087022 2230538 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:10:43.087071 2230538 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:10:43.087127 2230538 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:10:43.087178 2230538 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:10:43.087223 2230538 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:10:43.087277 2230538 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:10:43.089840 2230538 out.go:204]   - Booting up control plane ...
	I0911 11:10:43.089931 2230538 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:10:43.089995 2230538 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:10:43.090067 2230538 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:10:43.090158 2230538 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:10:43.090287 2230538 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:10:43.090367 2230538 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.003407 seconds
	I0911 11:10:43.090509 2230538 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:10:43.090634 2230538 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:10:43.090708 2230538 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:10:43.090865 2230538 kubeadm.go:322] [mark-control-plane] Marking the node ingress-addon-legacy-508741 as control-plane by adding the label "node-role.kubernetes.io/master=''"
	I0911 11:10:43.090921 2230538 kubeadm.go:322] [bootstrap-token] Using token: zn6w6r.8u28qb9fld6k3nlk
	I0911 11:10:43.092492 2230538 out.go:204]   - Configuring RBAC rules ...
	I0911 11:10:43.092621 2230538 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:10:43.092716 2230538 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:10:43.092855 2230538 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:10:43.093019 2230538 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:10:43.093126 2230538 kubeadm.go:322] [bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:10:43.093201 2230538 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:10:43.093296 2230538 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:10:43.093353 2230538 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 11:10:43.093400 2230538 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 11:10:43.093406 2230538 kubeadm.go:322] 
	I0911 11:10:43.093453 2230538 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 11:10:43.093477 2230538 kubeadm.go:322] 
	I0911 11:10:43.093541 2230538 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 11:10:43.093548 2230538 kubeadm.go:322] 
	I0911 11:10:43.093568 2230538 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 11:10:43.093617 2230538 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:10:43.093677 2230538 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:10:43.093688 2230538 kubeadm.go:322] 
	I0911 11:10:43.093735 2230538 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 11:10:43.093859 2230538 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:10:43.093928 2230538 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:10:43.093938 2230538 kubeadm.go:322] 
	I0911 11:10:43.094008 2230538 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:10:43.094079 2230538 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 11:10:43.094086 2230538 kubeadm.go:322] 
	I0911 11:10:43.094160 2230538 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token zn6w6r.8u28qb9fld6k3nlk \
	I0911 11:10:43.094254 2230538 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 11:10:43.094275 2230538 kubeadm.go:322]     --control-plane 
	I0911 11:10:43.094281 2230538 kubeadm.go:322] 
	I0911 11:10:43.094372 2230538 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:10:43.094383 2230538 kubeadm.go:322] 
	I0911 11:10:43.094493 2230538 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token zn6w6r.8u28qb9fld6k3nlk \
	I0911 11:10:43.094653 2230538 kubeadm.go:322]     --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 11:10:43.094665 2230538 cni.go:84] Creating CNI manager for ""
	I0911 11:10:43.094674 2230538 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:10:43.096349 2230538 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 11:10:43.097683 2230538 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 11:10:43.109435 2230538 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
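
The two runner calls above create /etc/cni/net.d and copy a 457-byte bridge CNI conflist onto the node. The exact payload is not printed in this log, so the sketch below only illustrates the general shape of such a bridge conflist; the subnet and plugin fields are assumptions, not the file minikube actually wrote.

    # Hedged sketch only: write a bridge CNI config by hand. The real file that
    # minikube copied over is not shown in this log, so every field here is illustrative.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
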
	I0911 11:10:43.131598 2230538 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 11:10:43.131732 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:43.131742 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=ingress-addon-legacy-508741 minikube.k8s.io/updated_at=2023_09_11T11_10_43_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
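
The two kubectl invocations above bind cluster-admin to the kube-system default service account (the minikube-rbac ClusterRoleBinding) and stamp the node with minikube version, commit, and name labels. A quick way to confirm both took effect, using the same bundled kubectl (these verification commands are illustrative, not part of the bootstrap itself):

    # Inspect the RBAC binding and node labels created above.
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get clusterrolebinding minikube-rbac -o yaml
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get nodes --show-labels
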
	I0911 11:10:43.311579 2230538 ops.go:34] apiserver oom_adj: -16
	I0911 11:10:43.311619 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:43.478219 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:44.188172 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:44.689096 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:45.188679 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:45.688714 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:46.188951 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:46.688551 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:47.188728 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:47.688753 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:48.188226 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:48.688625 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:49.189053 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:49.689051 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:50.188222 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:50.688109 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:51.188536 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:51.689026 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:52.188551 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:52.688973 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:53.188075 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:53.688949 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:54.189157 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:54.688965 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:55.188994 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:55.688940 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:56.188904 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:56.688754 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:57.188140 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:57.688547 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:58.188572 2230538 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.18.20/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:10:58.381227 2230538 kubeadm.go:1081] duration metric: took 15.249569163s to wait for elevateKubeSystemPrivileges.
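
The burst of identical `kubectl get sa default` runs between 11:10:43 and 11:10:58 is minikube polling roughly every 500ms until the default service account exists, which is what the 15.2s elevateKubeSystemPrivileges metric above measures. A rough shell equivalent of that wait loop, purely illustrative of the pattern rather than minikube's actual code:

    # Illustrative retry loop (not minikube's implementation): poll every 500ms
    # until the "default" ServiceAccount is visible, giving up after two minutes.
    KUBECTL=/var/lib/minikube/binaries/v1.18.20/kubectl
    KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
    deadline=$((SECONDS + 120))
    until sudo "$KUBECTL" get sa default --kubeconfig="$KUBECONFIG_PATH" >/dev/null 2>&1; do
      if (( SECONDS >= deadline )); then
        echo "timed out waiting for the default service account" >&2
        exit 1
      fi
      sleep 0.5
    done
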
	I0911 11:10:58.381276 2230538 kubeadm.go:406] StartCluster complete in 28.967453974s
	I0911 11:10:58.381302 2230538 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:58.381408 2230538 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:10:58.382431 2230538 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:10:58.382716 2230538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 11:10:58.382729 2230538 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 11:10:58.382852 2230538 addons.go:69] Setting storage-provisioner=true in profile "ingress-addon-legacy-508741"
	I0911 11:10:58.382874 2230538 addons.go:231] Setting addon storage-provisioner=true in "ingress-addon-legacy-508741"
	I0911 11:10:58.382881 2230538 addons.go:69] Setting default-storageclass=true in profile "ingress-addon-legacy-508741"
	I0911 11:10:58.382904 2230538 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ingress-addon-legacy-508741"
	I0911 11:10:58.382953 2230538 host.go:66] Checking if "ingress-addon-legacy-508741" exists ...
	I0911 11:10:58.382968 2230538 config.go:182] Loaded profile config "ingress-addon-legacy-508741": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.18.20
	I0911 11:10:58.383447 2230538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:10:58.383466 2230538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:10:58.383480 2230538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:10:58.383504 2230538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:10:58.383428 2230538 kapi.go:59] client config for ingress-addon-legacy-508741: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData
:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:10:58.384439 2230538 cert_rotation.go:137] Starting client certificate rotation controller
	I0911 11:10:58.401202 2230538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41295
	I0911 11:10:58.401727 2230538 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:10:58.402234 2230538 main.go:141] libmachine: Using API Version  1
	I0911 11:10:58.402269 2230538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:10:58.402612 2230538 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:10:58.403218 2230538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:10:58.403252 2230538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:10:58.403546 2230538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40035
	I0911 11:10:58.404096 2230538 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:10:58.404700 2230538 main.go:141] libmachine: Using API Version  1
	I0911 11:10:58.404728 2230538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:10:58.405149 2230538 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:10:58.405339 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetState
	I0911 11:10:58.408490 2230538 kapi.go:59] client config for ingress-addon-legacy-508741: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData
:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:10:58.411789 2230538 addons.go:231] Setting addon default-storageclass=true in "ingress-addon-legacy-508741"
	I0911 11:10:58.411840 2230538 host.go:66] Checking if "ingress-addon-legacy-508741" exists ...
	I0911 11:10:58.412219 2230538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:10:58.412268 2230538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:10:58.420864 2230538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43915
	I0911 11:10:58.421336 2230538 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:10:58.421925 2230538 main.go:141] libmachine: Using API Version  1
	I0911 11:10:58.421956 2230538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:10:58.422303 2230538 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:10:58.422536 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetState
	I0911 11:10:58.424366 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:58.426659 2230538 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:10:58.428525 2230538 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:10:58.428544 2230538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 11:10:58.428566 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:58.429147 2230538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34103
	I0911 11:10:58.429680 2230538 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:10:58.430241 2230538 main.go:141] libmachine: Using API Version  1
	I0911 11:10:58.430268 2230538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:10:58.430719 2230538 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:10:58.431381 2230538 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:10:58.431431 2230538 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:10:58.432590 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:58.432998 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:58.433038 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:58.433364 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:58.433560 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:58.433747 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:58.433902 2230538 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa Username:docker}
	I0911 11:10:58.447709 2230538 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37499
	I0911 11:10:58.448130 2230538 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:10:58.448759 2230538 main.go:141] libmachine: Using API Version  1
	I0911 11:10:58.448784 2230538 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:10:58.449146 2230538 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:10:58.449394 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetState
	I0911 11:10:58.451163 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .DriverName
	I0911 11:10:58.451435 2230538 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 11:10:58.451450 2230538 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 11:10:58.451468 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHHostname
	I0911 11:10:58.454414 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:58.454833 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:24:c1:5a", ip: ""} in network mk-ingress-addon-legacy-508741: {Iface:virbr1 ExpiryTime:2023-09-11 12:10:10 +0000 UTC Type:0 Mac:52:54:00:24:c1:5a Iaid: IPaddr:192.168.39.127 Prefix:24 Hostname:ingress-addon-legacy-508741 Clientid:01:52:54:00:24:c1:5a}
	I0911 11:10:58.454876 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | domain ingress-addon-legacy-508741 has defined IP address 192.168.39.127 and MAC address 52:54:00:24:c1:5a in network mk-ingress-addon-legacy-508741
	I0911 11:10:58.455120 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHPort
	I0911 11:10:58.455300 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHKeyPath
	I0911 11:10:58.455493 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .GetSSHUsername
	I0911 11:10:58.455620 2230538 sshutil.go:53] new ssh client: &{IP:192.168.39.127 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/ingress-addon-legacy-508741/id_rsa Username:docker}
	I0911 11:10:58.464759 2230538 kapi.go:248] "coredns" deployment in "kube-system" namespace and "ingress-addon-legacy-508741" context rescaled to 1 replicas
	I0911 11:10:58.464800 2230538 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.127 Port:8443 KubernetesVersion:v1.18.20 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:10:58.466624 2230538 out.go:177] * Verifying Kubernetes components...
	I0911 11:10:58.468304 2230538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:10:58.591108 2230538 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
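
The long sed pipeline above rewrites the coredns ConfigMap in place: it inserts a hosts block resolving host.minikube.internal to 192.168.39.1 (with fallthrough) ahead of the forward directive, and adds a log directive after errors. One way to confirm the injected record afterwards:

    # Print the live Corefile; after the replace above it should contain a block like
    #     hosts {
    #        192.168.39.1 host.minikube.internal
    #        fallthrough
    #     }
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
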
	I0911 11:10:58.591628 2230538 kapi.go:59] client config for ingress-addon-legacy-508741: &rest.Config{Host:"https://192.168.39.127:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData
:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:10:58.591915 2230538 node_ready.go:35] waiting up to 6m0s for node "ingress-addon-legacy-508741" to be "Ready" ...
	I0911 11:10:58.626571 2230538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:10:58.659882 2230538 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.18.20/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 11:10:58.720742 2230538 node_ready.go:49] node "ingress-addon-legacy-508741" has status "Ready":"True"
	I0911 11:10:58.720772 2230538 node_ready.go:38] duration metric: took 128.838534ms waiting for node "ingress-addon-legacy-508741" to be "Ready" ...
	I0911 11:10:58.720781 2230538 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:10:58.879168 2230538 pod_ready.go:78] waiting up to 6m0s for pod "coredns-66bff467f8-5ff5m" in "kube-system" namespace to be "Ready" ...
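
The pod_ready waits that follow are driven through the Go client, but the same readiness gate can be expressed with kubectl wait against the labels listed above (k8s-app=kube-dns for CoreDNS). This is an equivalent check, not what the test harness itself runs:

    # Equivalent readiness check for the CoreDNS pod being waited on above.
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=360s
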
	I0911 11:10:59.292320 2230538 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0911 11:10:59.330377 2230538 main.go:141] libmachine: Making call to close driver server
	I0911 11:10:59.332198 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .Close
	I0911 11:10:59.330463 2230538 main.go:141] libmachine: Making call to close driver server
	I0911 11:10:59.332285 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .Close
	I0911 11:10:59.332570 2230538 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:10:59.332589 2230538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:10:59.332612 2230538 main.go:141] libmachine: Making call to close driver server
	I0911 11:10:59.332623 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .Close
	I0911 11:10:59.332625 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Closing plugin on server side
	I0911 11:10:59.332650 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Closing plugin on server side
	I0911 11:10:59.332662 2230538 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:10:59.332671 2230538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:10:59.332680 2230538 main.go:141] libmachine: Making call to close driver server
	I0911 11:10:59.332689 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .Close
	I0911 11:10:59.332855 2230538 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:10:59.332877 2230538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:10:59.332954 2230538 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:10:59.332965 2230538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:10:59.332982 2230538 main.go:141] libmachine: Making call to close driver server
	I0911 11:10:59.332990 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) Calling .Close
	I0911 11:10:59.333195 2230538 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:10:59.333211 2230538 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:10:59.333216 2230538 main.go:141] libmachine: (ingress-addon-legacy-508741) DBG | Closing plugin on server side
	I0911 11:10:59.335139 2230538 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 11:10:59.337239 2230538 addons.go:502] enable addons completed in 954.500433ms: enabled=[storage-provisioner default-storageclass]
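
Only storage-provisioner and default-storageclass are enabled at this point; the ingress addon this test exercises appears to be switched on by a later step (the ingress-nginx containers show up in the CRI-O journal further down). The same toggles are available from the CLI, using the profile name taken from this log:

    # Inspect or change addon state for this profile from the CLI.
    out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons list
    out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons enable storage-provisioner
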
	I0911 11:11:00.907779 2230538 pod_ready.go:102] pod "coredns-66bff467f8-5ff5m" in "kube-system" namespace has status "Ready":"False"
	I0911 11:11:03.401498 2230538 pod_ready.go:102] pod "coredns-66bff467f8-5ff5m" in "kube-system" namespace has status "Ready":"False"
	I0911 11:11:05.403875 2230538 pod_ready.go:102] pod "coredns-66bff467f8-5ff5m" in "kube-system" namespace has status "Ready":"False"
	I0911 11:11:06.400980 2230538 pod_ready.go:92] pod "coredns-66bff467f8-5ff5m" in "kube-system" namespace has status "Ready":"True"
	I0911 11:11:06.401003 2230538 pod_ready.go:81] duration metric: took 7.521788761s waiting for pod "coredns-66bff467f8-5ff5m" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.401013 2230538 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.413438 2230538 pod_ready.go:92] pod "etcd-ingress-addon-legacy-508741" in "kube-system" namespace has status "Ready":"True"
	I0911 11:11:06.413463 2230538 pod_ready.go:81] duration metric: took 12.444342ms waiting for pod "etcd-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.413473 2230538 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.420073 2230538 pod_ready.go:92] pod "kube-apiserver-ingress-addon-legacy-508741" in "kube-system" namespace has status "Ready":"True"
	I0911 11:11:06.420096 2230538 pod_ready.go:81] duration metric: took 6.617418ms waiting for pod "kube-apiserver-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.420106 2230538 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.426015 2230538 pod_ready.go:92] pod "kube-controller-manager-ingress-addon-legacy-508741" in "kube-system" namespace has status "Ready":"True"
	I0911 11:11:06.426046 2230538 pod_ready.go:81] duration metric: took 5.931899ms waiting for pod "kube-controller-manager-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.426060 2230538 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-wb62q" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.431765 2230538 pod_ready.go:92] pod "kube-proxy-wb62q" in "kube-system" namespace has status "Ready":"True"
	I0911 11:11:06.431792 2230538 pod_ready.go:81] duration metric: took 5.724703ms waiting for pod "kube-proxy-wb62q" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.431806 2230538 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.595205 2230538 request.go:629] Waited for 163.277396ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.127:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ingress-addon-legacy-508741
	I0911 11:11:06.795470 2230538 request.go:629] Waited for 196.374472ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.127:8443/api/v1/nodes/ingress-addon-legacy-508741
	I0911 11:11:06.799394 2230538 pod_ready.go:92] pod "kube-scheduler-ingress-addon-legacy-508741" in "kube-system" namespace has status "Ready":"True"
	I0911 11:11:06.799429 2230538 pod_ready.go:81] duration metric: took 367.614681ms waiting for pod "kube-scheduler-ingress-addon-legacy-508741" in "kube-system" namespace to be "Ready" ...
	I0911 11:11:06.799442 2230538 pod_ready.go:38] duration metric: took 8.078645178s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:11:06.799468 2230538 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:11:06.799546 2230538 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:11:06.813215 2230538 api_server.go:72] duration metric: took 8.348326992s to wait for apiserver process to appear ...
	I0911 11:11:06.813251 2230538 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:11:06.813274 2230538 api_server.go:253] Checking apiserver healthz at https://192.168.39.127:8443/healthz ...
	I0911 11:11:06.819153 2230538 api_server.go:279] https://192.168.39.127:8443/healthz returned 200:
	ok
	I0911 11:11:06.820304 2230538 api_server.go:141] control plane version: v1.18.20
	I0911 11:11:06.820329 2230538 api_server.go:131] duration metric: took 7.070787ms to wait for apiserver health ...
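
The healthz probe above hits the apiserver directly. It can be reproduced outside the Go client; the curl variant assumes the cluster's default RBAC that lets unauthenticated callers reach /healthz, otherwise go through the admin kubeconfig on the node:

    # Same health probe by hand; a 200 response with body "ok" matches the log above.
    curl -fsSk https://192.168.39.127:8443/healthz
    # Or through the bundled kubectl and admin kubeconfig:
    sudo /var/lib/minikube/binaries/v1.18.20/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      get --raw /healthz
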
	I0911 11:11:06.820342 2230538 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:11:06.995715 2230538 request.go:629] Waited for 175.288565ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.127:8443/api/v1/namespaces/kube-system/pods
	I0911 11:11:07.002102 2230538 system_pods.go:59] 7 kube-system pods found
	I0911 11:11:07.002132 2230538 system_pods.go:61] "coredns-66bff467f8-5ff5m" [eb5ac4fa-0a89-462f-b36f-73552e086263] Running
	I0911 11:11:07.002137 2230538 system_pods.go:61] "etcd-ingress-addon-legacy-508741" [fda48638-d171-41af-a29a-9c54d371d1d3] Running
	I0911 11:11:07.002147 2230538 system_pods.go:61] "kube-apiserver-ingress-addon-legacy-508741" [1a7f0e92-a4ba-4d00-806e-ab54ef75d460] Running
	I0911 11:11:07.002151 2230538 system_pods.go:61] "kube-controller-manager-ingress-addon-legacy-508741" [216a2a08-01d7-496a-b2ad-c108021f4463] Running
	I0911 11:11:07.002158 2230538 system_pods.go:61] "kube-proxy-wb62q" [3ce8ec91-13ca-4a85-89da-a205408e7d0b] Running
	I0911 11:11:07.002162 2230538 system_pods.go:61] "kube-scheduler-ingress-addon-legacy-508741" [769fe418-2704-4458-95a9-ad46d5f7c0f3] Running
	I0911 11:11:07.002166 2230538 system_pods.go:61] "storage-provisioner" [1fef8f04-0694-4672-b7e2-b88e0cd2bdf6] Running
	I0911 11:11:07.002172 2230538 system_pods.go:74] duration metric: took 181.824372ms to wait for pod list to return data ...
	I0911 11:11:07.002180 2230538 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:11:07.195621 2230538 request.go:629] Waited for 193.363976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.127:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:11:07.198783 2230538 default_sa.go:45] found service account: "default"
	I0911 11:11:07.198813 2230538 default_sa.go:55] duration metric: took 196.627421ms for default service account to be created ...
	I0911 11:11:07.198824 2230538 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:11:07.395260 2230538 request.go:629] Waited for 196.329154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.127:8443/api/v1/namespaces/kube-system/pods
	I0911 11:11:07.402457 2230538 system_pods.go:86] 7 kube-system pods found
	I0911 11:11:07.402489 2230538 system_pods.go:89] "coredns-66bff467f8-5ff5m" [eb5ac4fa-0a89-462f-b36f-73552e086263] Running
	I0911 11:11:07.402495 2230538 system_pods.go:89] "etcd-ingress-addon-legacy-508741" [fda48638-d171-41af-a29a-9c54d371d1d3] Running
	I0911 11:11:07.402500 2230538 system_pods.go:89] "kube-apiserver-ingress-addon-legacy-508741" [1a7f0e92-a4ba-4d00-806e-ab54ef75d460] Running
	I0911 11:11:07.402505 2230538 system_pods.go:89] "kube-controller-manager-ingress-addon-legacy-508741" [216a2a08-01d7-496a-b2ad-c108021f4463] Running
	I0911 11:11:07.402509 2230538 system_pods.go:89] "kube-proxy-wb62q" [3ce8ec91-13ca-4a85-89da-a205408e7d0b] Running
	I0911 11:11:07.402513 2230538 system_pods.go:89] "kube-scheduler-ingress-addon-legacy-508741" [769fe418-2704-4458-95a9-ad46d5f7c0f3] Running
	I0911 11:11:07.402520 2230538 system_pods.go:89] "storage-provisioner" [1fef8f04-0694-4672-b7e2-b88e0cd2bdf6] Running
	I0911 11:11:07.402527 2230538 system_pods.go:126] duration metric: took 203.697951ms to wait for k8s-apps to be running ...
	I0911 11:11:07.402534 2230538 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:11:07.402586 2230538 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:11:07.416011 2230538 system_svc.go:56] duration metric: took 13.463522ms WaitForService to wait for kubelet.
	I0911 11:11:07.416049 2230538 kubeadm.go:581] duration metric: took 8.951167715s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:11:07.416079 2230538 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:11:07.595627 2230538 request.go:629] Waited for 179.421676ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.127:8443/api/v1/nodes
	I0911 11:11:07.599396 2230538 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:11:07.599431 2230538 node_conditions.go:123] node cpu capacity is 2
	I0911 11:11:07.599451 2230538 node_conditions.go:105] duration metric: took 183.367107ms to run NodePressure ...
	I0911 11:11:07.599472 2230538 start.go:228] waiting for startup goroutines ...
	I0911 11:11:07.599479 2230538 start.go:233] waiting for cluster config update ...
	I0911 11:11:07.599497 2230538 start.go:242] writing updated cluster config ...
	I0911 11:11:07.599857 2230538 ssh_runner.go:195] Run: rm -f paused
	I0911 11:11:07.652563 2230538 start.go:600] kubectl: 1.28.1, cluster: 1.18.20 (minor skew: 10)
	I0911 11:11:07.655083 2230538 out.go:177] 
	W0911 11:11:07.656767 2230538 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.18.20.
	I0911 11:11:07.658319 2230538 out.go:177]   - Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
	I0911 11:11:07.659943 2230538 out.go:177] * Done! kubectl is now configured to use "ingress-addon-legacy-508741" cluster and "default" namespace by default
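
The warning above flags a ten-minor-version skew between the host kubectl (1.28.1) and the cluster (1.18.20). The suggested workaround is to go through minikube's bundled, version-matched kubectl, for example:

    # Use the kubectl binary minikube manages for this profile instead of the host one.
    out/minikube-linux-amd64 -p ingress-addon-legacy-508741 kubectl -- get pods -A
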
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 11:10:07 UTC, ends at Mon 2023-09-11 11:14:08 UTC. --
	Sep 11 11:14:07 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:07.957920381Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81b374cb-bf83-4e4c-b7a4-ccd80cb63a5c name=/runtime.v1alpha2.Runtim
eService/ListContainers
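
The debug entries in this journal are CRI ListContainers round trips; the container inventory they return (hello-world-app, nginx, the exited ingress-nginx controller and admission jobs, and the control-plane pods) can be read far more compactly on the node with crictl:

    # Compact view of the same container list the RPC responses above enumerate.
    out/minikube-linux-amd64 -p ingress-addon-legacy-508741 ssh -- sudo crictl ps -a
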
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.274210434Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2eccbe56-48a7-449a-933c-bd13f0d9af4b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.274274625Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2eccbe56-48a7-449a-933c-bd13f0d9af4b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.274683782Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2eccbe56-48a7-449a-933c-bd13f0d9af4b name=/runtime.v1alpha2.Runtim
eService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.311483496Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=10ffc1b5-c545-48b0-818a-678a57c57b47 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.311581451Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=10ffc1b5-c545-48b0-818a-678a57c57b47 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.311863427Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=10ffc1b5-c545-48b0-818a-678a57c57b47 name=/runtime.v1alpha2.Runtim
eService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.347032839Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=297d5ecb-7325-493c-b52d-908cd8dde158 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.347135398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=297d5ecb-7325-493c-b52d-908cd8dde158 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.347491921Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=297d5ecb-7325-493c-b52d-908cd8dde158 name=/runtime.v1alpha2.Runtim
eService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.387274513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=edd29873-f97d-4cae-bb5f-674fccbc6875 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.387429260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=edd29873-f97d-4cae-bb5f-674fccbc6875 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.387752056Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=edd29873-f97d-4cae-bb5f-674fccbc6875 name=/runtime.v1alpha2.Runtim
eService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.425503005Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=9c191b2d-ec5e-429f-9249-c627821205cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.425596237Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=9c191b2d-ec5e-429f-9249-c627821205cd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.426608469Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=9c191b2d-ec5e-429f-9249-c627821205cd name=/runtime.v1alpha2.Runtim
eService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.465451432Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6d51bdaf-1d4e-46dd-9b7d-77d3e61213ee name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.465549094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6d51bdaf-1d4e-46dd-9b7d-77d3e61213ee name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.465846680Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6d51bdaf-1d4e-46dd-9b7d-77d3e61213ee name=/runtime.v1alpha2.Runtim
eService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.499227657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=372b2649-cfc7-479a-8004-584537d1e103 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.499398954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=372b2649-cfc7-479a-8004-584537d1e103 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.499670311Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=372b2649-cfc7-479a-8004-584537d1e103 name=/runtime.v1alpha2.Runtim
eService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.533138922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=88134edf-900b-443d-af23-048b1f49b7b4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.533238923Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=88134edf-900b-443d-af23-048b1f49b7b4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:14:08 ingress-addon-legacy-508741 crio[720]: time="2023-09-11 11:14:08.533607748Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:d6ca256ccbd277ec584578e5940aac169950e435b6067cbaa63e69ba1c68aa6e,PodSandboxId:d9e13cde300b49dfc4ce618f91b50d6d52954f82cc6867731a7db84d3fb41028,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,Annotations:map[string]string{},},ImageRef:gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb,State:CONTAINER_RUNNING,CreatedAt:1694430840217402916,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5f5d8b66bb-q4czg,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 262dfcb5-ae2a-4e49-b7c4-d339014404bf,},Annotations:map[string]string{io.kubernetes.container.hash: 97b95acb,io.ku
bernetes.container.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:130abd87bb42310c29add618234ebe5da01c64b01b090edaca597a94f2328505,PodSandboxId:59877e8e5b82297030738d4f89ffa254222d757514d091d03cb20d65b13722ea,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,Annotations:map[string]string{},},ImageRef:docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70,State:CONTAINER_RUNNING,CreatedAt:1694430698906891737,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: cc3c1e4e-9bec-426e-8070-6c2956a4e987,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: f727c55f,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d,PodSandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694430690400471410,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes
.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:394818a33e6e14567919a296677c87b7d0523138a5e1454ea296ee388ff66829,PodSandboxId:aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439,Metadata:&ContainerMetadata{Name:controller,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,Annotations:map[string]string{},},ImageRef:registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324,State:CONTAINER_EXITED,CreatedAt:1694430681337670077,Labels:map[string]string{io.kubernetes.container.name: controller,io.kubernetes.pod.name: ingress-nginx-controller-7fcf777cb7-djfgh,io.
kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: d29d7caf-f1ec-4ce7-b53e-f34d54576705,},Annotations:map[string]string{io.kubernetes.container.hash: f036d157,io.kubernetes.container.ports: [{\"name\":\"http\",\"hostPort\":80,\"containerPort\":80,\"protocol\":\"TCP\"},{\"name\":\"https\",\"hostPort\":443,\"containerPort\":443,\"protocol\":\"TCP\"},{\"name\":\"webhook\",\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.preStopHandler: {\"exec\":{\"command\":[\"/wait-shutdown\"]}},io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 0,},},&Container{Id:cdeb432d94c95b20119545a8f8a70101fd7481a6ddaa05ce91b6d3d451566b0b,PodSandboxId:55e54c4564cb57dad6bc551c08c038d4208526366127b954680173fa4d246534,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea
58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672470627402,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-4sdb9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: f9853694-4198-46cc-8659-9abc234e6b4e,},Annotations:map[string]string{io.kubernetes.container.hash: cf82230e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0fd895fceff8f96c6ccb9ec6f764b3b056b6279c654fcb5140ce6c8c81e864d6,PodSandboxId:7213f0e2a5533a78c98c2132eb2922d11012049dea41c21464b4ec3ecf15a7c9,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:docker.io/jettech/kube-webhook-
certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,Annotations:map[string]string{},},ImageRef:docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6,State:CONTAINER_EXITED,CreatedAt:1694430672294186839,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-926hq,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 44d61526-bd9d-416a-a96d-48ec138f15f5,},Annotations:map[string]string{io.kubernetes.container.hash: 75c9fbee,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13,PodSandboxId:0cde4a441610f6db93f16d3d039b4817feb97a5c11d97690fd0bc08fa48ee19c,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{
Image:67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:2c8d61c46f484d881db43b34d13ca47a269336e576c81cf007ca740fa9ec0800,State:CONTAINER_RUNNING,CreatedAt:1694430661461015741,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-66bff467f8-5ff5m,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: eb5ac4fa-0a89-462f-b36f-73552e086263,},Annotations:map[string]string{io.kubernetes.container.hash: b822fea3,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cdef59503f7cf4f4639540782896
4f0acb95cf7412df0fd42d89fe20f4e9957b,PodSandboxId:b8a4d65b36e34d4748422f978951bd0f26ef2664beea29d86f3ccf41df3ea58b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:12f2b93c34db1caf73610092df74688e676c3b5abce940c25563ac5e93175381,State:CONTAINER_RUNNING,CreatedAt:1694430660789100316,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-wb62q,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3ce8ec91-13ca-4a85-89da-a205408e7d0b,},Annotations:map[string]string{io.kubernetes.container.hash: 8e5df220,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b,Pod
SandboxId:90023770a5539c5195b7f1b29199763f682650f3da8785394e0d1967cae9db6b,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694430660007682147,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1fef8f04-0694-4672-b7e2-b88e0cd2bdf6,},Annotations:map[string]string{io.kubernetes.container.hash: fda46a5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036,PodSa
ndboxId:a035c1e15516a327a6e622de65304840683392169b7e601760d9f55a7c751691,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:4198ba6f82f642dfd18ecf840ee37afb9df4b596f06eef20e44d0aec4ea27216,State:CONTAINER_RUNNING,CreatedAt:1694430635492895656,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a5bc2a033b0924031bda4ef90b7f644b,},Annotations:map[string]string{io.kubernetes.container.hash: c99ccfd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a,PodSandboxId:df8b48e71008f9d033df1ce3ab4aed79988fa4
15422d6e76726cc5d941a5ad2f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:b1ae8022783f0bc6169330aa1927fff648ff81da74482f89da764cbb6be6a402,State:CONTAINER_RUNNING,CreatedAt:1694430635247491478,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d12e497b0008e22acbcd5a9cf2dd48ac,},Annotations:map[string]string{io.kubernetes.container.hash: ef5ef709,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07,PodSandboxId:77133b97584e9795d0b4343b99bd2b899be6a036bea7
dc765672b3b7ac6faa70,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:391cf23f3094f59e1ce222cb1fd0593ee73e120d4fdeb29d563bd0432d2e7c32,State:CONTAINER_RUNNING,CreatedAt:1694430634740991747,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fb4aa5676666a253c05aaffbf48e4d4d,},Annotations:map[string]string{io.kubernetes.container.hash: 1c86d5c9,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da,PodSandboxId:fc945f99203e875de894f8f8ac18e8f2e5971a8a3d4cb96b03
f5f828df07d8f8,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:1adc80c92cd714665cb8fe73a3157a44b050595a61d376bfd01ab2eb230230bd,State:CONTAINER_RUNNING,CreatedAt:1694430634761542771,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ingress-addon-legacy-508741,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b395a1e17534e69e27827b1f8d737725,},Annotations:map[string]string{io.kubernetes.container.hash: 345eaecd,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=88134edf-900b-443d-af23-048b1f49b7b4 name=/runtime.v1alpha2.Runtim
eService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	d6ca256ccbd27       gcr.io/google-samples/hello-app@sha256:9478d168fb78b0764dd7b3c147864c4da650ee456a1f21fc4d3fe2fbb20fe1fb            8 seconds ago       Running             hello-world-app           0                   d9e13cde300b4
	130abd87bb423       docker.io/library/nginx@sha256:16164a43b5faec40adb521e98272edc528e74f31c1352719132b8f7e53418d70                    2 minutes ago       Running             nginx                     0                   59877e8e5b822
	e7c7916aea0b4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   2 minutes ago       Running             storage-provisioner       1                   90023770a5539
	394818a33e6e1       registry.k8s.io/ingress-nginx/controller@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324   2 minutes ago       Exited              controller                0                   aa31c38c9c5b2
	cdeb432d94c95       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              patch                     0                   55e54c4564cb5
	0fd895fceff8f       docker.io/jettech/kube-webhook-certgen@sha256:784853e84a0223f34ea58fe36766c2dbeb129b125d25f16b8468c903262b77f6     2 minutes ago       Exited              create                    0                   7213f0e2a5533
	bf379833a90d6       67da37a9a360e600e74464da48437257b00a754c77c40f60c65e4cb327c34bd5                                                   3 minutes ago       Running             coredns                   0                   0cde4a441610f
	cdef59503f7cf       27f8b8d51985f755cfb3ffea424fa34865cc0da63e99378d8202f923c3c5a8ba                                                   3 minutes ago       Running             kube-proxy                0                   b8a4d65b36e34
	092d16c2a45f2       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                   3 minutes ago       Exited              storage-provisioner       0                   90023770a5539
	e704de1c4376f       303ce5db0e90dab1c5728ec70d21091201a23cdf8aeca70ab54943bbaaf0833f                                                   3 minutes ago       Running             etcd                      0                   a035c1e15516a
	edb7fd9df3659       a05a1a79adaad058478b7638d2e73cf408b283305081516fbe02706b0e205346                                                   3 minutes ago       Running             kube-scheduler            0                   df8b48e71008f
	da86dffd8ffa5       e7c545a60706cf009a893afdc7dba900cc2e342b8042b9c421d607ca41e8b290                                                   3 minutes ago       Running             kube-controller-manager   0                   fc945f99203e8
	0bb5e9edefab8       7d8d2960de69688eab5698081441539a1662f47e092488973e455a8334955cb1                                                   3 minutes ago       Running             kube-apiserver            0                   77133b97584e9
	
	* 
	* ==> coredns [bf379833a90d60d890053d954df61daa2c63eb4c19539682f776eee8632d3e13] <==
	* [INFO] 10.244.0.5:38686 - 29177 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000084492s
	[INFO] 10.244.0.5:57746 - 6610 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000132422s
	[INFO] 10.244.0.5:57746 - 50088 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00011977s
	[INFO] 10.244.0.5:38686 - 44965 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000058135s
	[INFO] 10.244.0.5:57746 - 48932 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00010974s
	[INFO] 10.244.0.5:38686 - 391 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000066004s
	[INFO] 10.244.0.5:57746 - 63005 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000067747s
	[INFO] 10.244.0.5:38686 - 55343 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046448s
	[INFO] 10.244.0.5:57746 - 46515 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000174686s
	[INFO] 10.244.0.5:38686 - 65323 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00003339s
	[INFO] 10.244.0.5:38686 - 59924 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000078235s
	[INFO] 10.244.0.5:43621 - 24419 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117592s
	[INFO] 10.244.0.5:46784 - 47953 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000042928s
	[INFO] 10.244.0.5:46784 - 16477 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000062491s
	[INFO] 10.244.0.5:46784 - 45663 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000093826s
	[INFO] 10.244.0.5:43621 - 11636 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000064457s
	[INFO] 10.244.0.5:46784 - 17201 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000037023s
	[INFO] 10.244.0.5:43621 - 38802 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000077468s
	[INFO] 10.244.0.5:46784 - 14063 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000082445s
	[INFO] 10.244.0.5:43621 - 59125 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00007859s
	[INFO] 10.244.0.5:46784 - 45155 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060598s
	[INFO] 10.244.0.5:43621 - 40242 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000054402s
	[INFO] 10.244.0.5:46784 - 18345 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000065183s
	[INFO] 10.244.0.5:43621 - 64626 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.00006726s
	[INFO] 10.244.0.5:43621 - 53489 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000066305s
	
	* 
	* ==> describe nodes <==
	* Name:               ingress-addon-legacy-508741
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ingress-addon-legacy-508741
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=ingress-addon-legacy-508741
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_10_43_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:10:39 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ingress-addon-legacy-508741
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:14:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:11:43 +0000   Mon, 11 Sep 2023 11:10:36 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:11:43 +0000   Mon, 11 Sep 2023 11:10:36 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:11:43 +0000   Mon, 11 Sep 2023 11:10:36 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:11:43 +0000   Mon, 11 Sep 2023 11:10:53 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.127
	  Hostname:    ingress-addon-legacy-508741
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             4012800Ki
	  pods:               110
	System Info:
	  Machine ID:                 bdc5dfeb33334df4976ce20d8104073d
	  System UUID:                bdc5dfeb-3333-4df4-976c-e20d8104073d
	  Boot ID:                    285dc0df-1423-4d05-8bdb-0ee670e8edd0
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.18.20
	  Kube-Proxy Version:         v1.18.20
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                   CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                   ----                                                   ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5f5d8b66bb-q4czg                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         12s
	  default                     nginx                                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m34s
	  kube-system                 coredns-66bff467f8-5ff5m                               100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     3m10s
	  kube-system                 etcd-ingress-addon-legacy-508741                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-apiserver-ingress-addon-legacy-508741             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-controller-manager-ingress-addon-legacy-508741    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 kube-proxy-wb62q                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kube-scheduler-ingress-addon-legacy-508741             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m25s
	  kube-system                 storage-provisioner                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                650m (32%)  0 (0%)
	  memory             70Mi (1%)   170Mi (4%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From        Message
	  ----    ------                   ----                   ----        -------
	  Normal  NodeHasSufficientMemory  3m35s (x5 over 3m35s)  kubelet     Node ingress-addon-legacy-508741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m35s (x5 over 3m35s)  kubelet     Node ingress-addon-legacy-508741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m35s (x5 over 3m35s)  kubelet     Node ingress-addon-legacy-508741 status is now: NodeHasSufficientPID
	  Normal  Starting                 3m25s                  kubelet     Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m25s                  kubelet     Node ingress-addon-legacy-508741 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m25s                  kubelet     Node ingress-addon-legacy-508741 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m25s                  kubelet     Node ingress-addon-legacy-508741 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m25s                  kubelet     Updated Node Allocatable limit across pods
	  Normal  NodeReady                3m15s                  kubelet     Node ingress-addon-legacy-508741 status is now: NodeReady
	  Normal  Starting                 3m8s                   kube-proxy  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep11 11:09] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.138250] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Sep11 11:10] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.738977] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.157790] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.103443] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000020] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000007] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.628577] systemd-fstab-generator[647]: Ignoring "noauto" for root device
	[  +0.107398] systemd-fstab-generator[658]: Ignoring "noauto" for root device
	[  +0.150054] systemd-fstab-generator[671]: Ignoring "noauto" for root device
	[  +0.112005] systemd-fstab-generator[682]: Ignoring "noauto" for root device
	[  +0.227264] systemd-fstab-generator[706]: Ignoring "noauto" for root device
	[  +8.663484] systemd-fstab-generator[1037]: Ignoring "noauto" for root device
	[  +3.389885] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[  +9.477493] systemd-fstab-generator[1439]: Ignoring "noauto" for root device
	[ +18.213639] kauditd_printk_skb: 6 callbacks suppressed
	[Sep11 11:11] kauditd_printk_skb: 11 callbacks suppressed
	[  +6.259791] kauditd_printk_skb: 6 callbacks suppressed
	[ +21.618916] kauditd_printk_skb: 7 callbacks suppressed
	[  +5.211737] kauditd_printk_skb: 3 callbacks suppressed
	[Sep11 11:14] kauditd_printk_skb: 5 callbacks suppressed
	
	* 
	* ==> etcd [e704de1c4376fb54a1140b97021211c974fbb162d24d1d6e89aa0c7001aa3036] <==
	* raft2023/09/11 11:10:36 INFO: 9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)
	2023-09-11 11:10:36.310199 W | auth: simple token is not cryptographically signed
	2023-09-11 11:10:36.314031 I | etcdserver: starting server... [version: 3.4.3, cluster version: to_be_decided]
	2023-09-11 11:10:36.321640 I | embed: ClientTLS: cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = 
	2023-09-11 11:10:36.321821 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-11 11:10:36.321952 I | embed: listening for peers on 192.168.39.127:2380
	2023-09-11 11:10:36.322148 I | etcdserver: 9dc5e8b969e9632c as single-node; fast-forwarding 9 ticks (election ticks 10)
	raft2023/09/11 11:10:36 INFO: 9dc5e8b969e9632c switched to configuration voters=(11368748717410181932)
	2023-09-11 11:10:36.322716 I | etcdserver/membership: added member 9dc5e8b969e9632c [https://192.168.39.127:2380] to cluster 367c7cb0db09c3ab
	raft2023/09/11 11:10:36 INFO: 9dc5e8b969e9632c is starting a new election at term 1
	raft2023/09/11 11:10:36 INFO: 9dc5e8b969e9632c became candidate at term 2
	raft2023/09/11 11:10:36 INFO: 9dc5e8b969e9632c received MsgVoteResp from 9dc5e8b969e9632c at term 2
	raft2023/09/11 11:10:36 INFO: 9dc5e8b969e9632c became leader at term 2
	raft2023/09/11 11:10:36 INFO: raft.node: 9dc5e8b969e9632c elected leader 9dc5e8b969e9632c at term 2
	2023-09-11 11:10:36.602288 I | etcdserver: published {Name:ingress-addon-legacy-508741 ClientURLs:[https://192.168.39.127:2379]} to cluster 367c7cb0db09c3ab
	2023-09-11 11:10:36.602412 I | embed: ready to serve client requests
	2023-09-11 11:10:36.602735 I | embed: ready to serve client requests
	2023-09-11 11:10:36.604082 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-11 11:10:36.606642 I | embed: serving client requests on 192.168.39.127:2379
	2023-09-11 11:10:36.606903 I | etcdserver: setting up the initial cluster version to 3.4
	2023-09-11 11:10:36.618622 N | etcdserver/membership: set the initial cluster version to 3.4
	2023-09-11 11:10:36.618722 I | etcdserver/api: enabled capabilities for version 3.4
	2023-09-11 11:10:58.079759 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/kube-system/certificate-controller\" " with result "range_response_count:1 size:263" took too long (448.576596ms) to execute
	2023-09-11 11:10:58.080060 W | etcdserver: read-only range request "key:\"/registry/serviceaccounts/default/default\" " with result "range_response_count:0 size:5" took too long (306.547783ms) to execute
	2023-09-11 11:11:45.089730 W | etcdserver: read-only range request "key:\"/registry/pods/default/\" range_end:\"/registry/pods/default0\" " with result "range_response_count:1 size:2214" took too long (282.011748ms) to execute
	
	* 
	* ==> kernel <==
	*  11:14:08 up 4 min,  0 users,  load average: 1.08, 0.53, 0.23
	Linux ingress-addon-legacy-508741 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [0bb5e9edefab89db41ab4cec2d01404a6d28870a2e62df2b9864c0a329926b07] <==
	* I0911 11:10:39.812661       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:10:39.812774       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:10:40.602976       1 controller.go:130] OpenAPI AggregationController: action for item : Nothing (removed from the queue).
	I0911 11:10:40.603026       1 controller.go:130] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0911 11:10:40.610970       1 storage_scheduling.go:134] created PriorityClass system-node-critical with value 2000001000
	I0911 11:10:40.615770       1 storage_scheduling.go:134] created PriorityClass system-cluster-critical with value 2000000000
	I0911 11:10:40.615842       1 storage_scheduling.go:143] all system priority classes are created successfully or already exist.
	I0911 11:10:41.184645       1 controller.go:609] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:10:41.258052       1 controller.go:609] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	W0911 11:10:41.373096       1 lease.go:224] Resetting endpoints for master service "kubernetes" to [192.168.39.127]
	I0911 11:10:41.374077       1 controller.go:609] quota admission added evaluator for: endpoints
	I0911 11:10:41.379006       1 controller.go:609] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:10:41.970520       1 controller.go:609] quota admission added evaluator for: serviceaccounts
	I0911 11:10:42.944826       1 controller.go:609] quota admission added evaluator for: deployments.apps
	I0911 11:10:43.061108       1 controller.go:609] quota admission added evaluator for: daemonsets.apps
	I0911 11:10:43.410003       1 controller.go:609] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:10:58.080536       1 trace.go:116] Trace[897711676]: "GuaranteedUpdate etcd3" type:*core.ServiceAccount (started: 2023-09-11 11:10:57.529652315 +0000 UTC m=+22.595422457) (total time: 550.858279ms):
	Trace[897711676]: [550.839572ms] [550.610836ms] Transaction committed
	I0911 11:10:58.080682       1 trace.go:116] Trace[1291316632]: "Update" url:/api/v1/namespaces/kube-system/serviceaccounts/deployment-controller,user-agent:kube-controller-manager/v1.18.20 (linux/amd64) kubernetes/1f3e19b/tokens-controller,client:192.168.39.127 (started: 2023-09-11 11:10:57.529529967 +0000 UTC m=+22.595300104) (total time: 551.12326ms):
	Trace[1291316632]: [551.083732ms] [550.998967ms] Object stored in database
	I0911 11:10:58.188443       1 controller.go:609] quota admission added evaluator for: replicasets.apps
	I0911 11:10:58.459055       1 controller.go:609] quota admission added evaluator for: controllerrevisions.apps
	I0911 11:11:08.514679       1 controller.go:609] quota admission added evaluator for: jobs.batch
	I0911 11:11:34.475454       1 controller.go:609] quota admission added evaluator for: ingresses.networking.k8s.io
	E0911 11:14:01.070638       1 authentication.go:53] Unable to authenticate the request due to an error: [invalid bearer token, Token has been invalidated]
	
	* 
	* ==> kube-controller-manager [da86dffd8ffa534ba72df99119308a97695e8c8e6a826962aecdd57e3d30b2da] <==
	* I0911 11:10:58.576470       1 node_lifecycle_controller.go:1433] Initializing eviction metric for zone: 
	W0911 11:10:58.576562       1 node_lifecycle_controller.go:1048] Missing timestamp for Node ingress-addon-legacy-508741. Assuming now as a timestamp.
	I0911 11:10:58.576605       1 node_lifecycle_controller.go:1249] Controller detected that zone  is now in state Normal.
	I0911 11:10:58.576639       1 event.go:278] Event(v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ingress-addon-legacy-508741", UID:"af95a0ad-a1e5-4543-91e6-137f14044833", APIVersion:"v1", ResourceVersion:"", FieldPath:""}): type: 'Normal' reason: 'RegisteredNode' Node ingress-addon-legacy-508741 event: Registered Node ingress-addon-legacy-508741 in Controller
	I0911 11:10:58.576672       1 taint_manager.go:187] Starting NoExecuteTaintManager
	I0911 11:10:58.591080       1 shared_informer.go:230] Caches are synced for ClusterRoleAggregator 
	I0911 11:10:58.733043       1 shared_informer.go:230] Caches are synced for persistent volume 
	I0911 11:10:58.745877       1 shared_informer.go:230] Caches are synced for garbage collector 
	I0911 11:10:58.745969       1 garbagecollector.go:142] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0911 11:10:58.787178       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:10:58.795548       1 shared_informer.go:230] Caches are synced for resource quota 
	I0911 11:10:58.820020       1 shared_informer.go:230] Caches are synced for garbage collector 
	E0911 11:10:58.893162       1 daemon_controller.go:321] kube-system/kube-proxy failed with : error storing status for daemon set &v1.DaemonSet{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-proxy", GenerateName:"", Namespace:"kube-system", SelfLink:"/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy", UID:"5beac55f-53b2-4a16-84d6-2c1ffbedb99a", ResourceVersion:"346", Generation:1, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:63830027443, loc:(*time.Location)(0x6d002e0)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string{"deprecated.daemonset.template.generation":"1"}, OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry{v1.ManagedFieldsEntry{Manager:"kubeadm", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001280b20), FieldsType:"FieldsV1", FieldsV1:(*v1.Fields
V1)(0xc001280b40)}, v1.ManagedFieldsEntry{Manager:"kube-controller-manager", Operation:"Update", APIVersion:"apps/v1", Time:(*v1.Time)(0xc001280b60), FieldsType:"FieldsV1", FieldsV1:(*v1.FieldsV1)(0xc001280b80)}}}, Spec:v1.DaemonSetSpec{Selector:(*v1.LabelSelector)(0xc001280ba0), Template:v1.PodTemplateSpec{ObjectMeta:v1.ObjectMeta{Name:"", GenerateName:"", Namespace:"", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-proxy"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v1.PodSpec{Volumes:[]v1.Volume{v1.Volume{Name:"kube-proxy", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(nil), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDi
skVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(0xc001491500), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.Sca
leIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"xtables-lock", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001280bc0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtu
alDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}, v1.Volume{Name:"lib-modules", VolumeSource:v1.VolumeSource{HostPath:(*v1.HostPathVolumeSource)(0xc001280be0), EmptyDir:(*v1.EmptyDirVolumeSource)(nil), GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), GitRepo:(*v1.GitRepoVolumeSource)(nil), Secret:(*v1.SecretVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), ISCSI:(*v1.ISCSIVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), PersistentVolumeClaim:(*v1.PersistentVolumeClaimVolumeSource)(nil), RBD:(*v1.RBDVolumeSource)(nil), FlexVolume:(*v1.FlexVolumeSource)(nil), Cinder:(*v1.CinderVolumeSource
)(nil), CephFS:(*v1.CephFSVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), DownwardAPI:(*v1.DownwardAPIVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), AzureFile:(*v1.AzureFileVolumeSource)(nil), ConfigMap:(*v1.ConfigMapVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), Projected:(*v1.ProjectedVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOVolumeSource)(nil), StorageOS:(*v1.StorageOSVolumeSource)(nil), CSI:(*v1.CSIVolumeSource)(nil)}}}, InitContainers:[]v1.Container(nil), Containers:[]v1.Container{v1.Container{Name:"kube-proxy", Image:"k8s.gcr.io/kube-proxy:v1.18.20", Command:[]string{"/usr/local/bin/kube-proxy", "--config=/var/lib/kube-proxy/config.conf", "--hostname-override=$(NODE_NAME)"}, Args:[]string(nil), WorkingDir:"", Ports:[]v1.ContainerPort(nil), EnvFrom:[]v1.EnvFromSource(
nil), Env:[]v1.EnvVar{v1.EnvVar{Name:"NODE_NAME", Value:"", ValueFrom:(*v1.EnvVarSource)(0xc001280c20)}}, Resources:v1.ResourceRequirements{Limits:v1.ResourceList(nil), Requests:v1.ResourceList(nil)}, VolumeMounts:[]v1.VolumeMount{v1.VolumeMount{Name:"kube-proxy", ReadOnly:false, MountPath:"/var/lib/kube-proxy", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"xtables-lock", ReadOnly:false, MountPath:"/run/xtables.lock", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}, v1.VolumeMount{Name:"lib-modules", ReadOnly:true, MountPath:"/lib/modules", SubPath:"", MountPropagation:(*v1.MountPropagationMode)(nil), SubPathExpr:""}}, VolumeDevices:[]v1.VolumeDevice(nil), LivenessProbe:(*v1.Probe)(nil), ReadinessProbe:(*v1.Probe)(nil), StartupProbe:(*v1.Probe)(nil), Lifecycle:(*v1.Lifecycle)(nil), TerminationMessagePath:"/dev/termination-log", TerminationMessagePolicy:"File", ImagePullPolicy:"IfNotPresent", SecurityContext:(*v1.SecurityContex
t)(0xc0012997c0), Stdin:false, StdinOnce:false, TTY:false}}, EphemeralContainers:[]v1.EphemeralContainer(nil), RestartPolicy:"Always", TerminationGracePeriodSeconds:(*int64)(0xc0015049c8), ActiveDeadlineSeconds:(*int64)(nil), DNSPolicy:"ClusterFirst", NodeSelector:map[string]string{"kubernetes.io/os":"linux"}, ServiceAccountName:"kube-proxy", DeprecatedServiceAccount:"kube-proxy", AutomountServiceAccountToken:(*bool)(nil), NodeName:"", HostNetwork:true, HostPID:false, HostIPC:false, ShareProcessNamespace:(*bool)(nil), SecurityContext:(*v1.PodSecurityContext)(0xc000302070), ImagePullSecrets:[]v1.LocalObjectReference(nil), Hostname:"", Subdomain:"", Affinity:(*v1.Affinity)(nil), SchedulerName:"default-scheduler", Tolerations:[]v1.Toleration{v1.Toleration{Key:"CriticalAddonsOnly", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}, v1.Toleration{Key:"", Operator:"Exists", Value:"", Effect:"", TolerationSeconds:(*int64)(nil)}}, HostAliases:[]v1.HostAlias(nil), PriorityClassName:"system-node-
critical", Priority:(*int32)(nil), DNSConfig:(*v1.PodDNSConfig)(nil), ReadinessGates:[]v1.PodReadinessGate(nil), RuntimeClassName:(*string)(nil), EnableServiceLinks:(*bool)(nil), PreemptionPolicy:(*v1.PreemptionPolicy)(nil), Overhead:v1.ResourceList(nil), TopologySpreadConstraints:[]v1.TopologySpreadConstraint(nil)}}, UpdateStrategy:v1.DaemonSetUpdateStrategy{Type:"RollingUpdate", RollingUpdate:(*v1.RollingUpdateDaemonSet)(0xc00000e7f8)}, MinReadySeconds:0, RevisionHistoryLimit:(*int32)(0xc001504a18)}, Status:v1.DaemonSetStatus{CurrentNumberScheduled:0, NumberMisscheduled:0, DesiredNumberScheduled:1, NumberReady:0, ObservedGeneration:1, UpdatedNumberScheduled:0, NumberAvailable:0, NumberUnavailable:1, CollisionCount:(*int32)(nil), Conditions:[]v1.DaemonSetCondition(nil)}}: Operation cannot be fulfilled on daemonsets.apps "kube-proxy": the object has been modified; please apply your changes to the latest version and try again
	E0911 11:10:58.966277       1 clusterroleaggregation_controller.go:181] view failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "view": the object has been modified; please apply your changes to the latest version and try again
	E0911 11:10:58.966455       1 clusterroleaggregation_controller.go:181] admin failed with : Operation cannot be fulfilled on clusterroles.rbac.authorization.k8s.io "admin": the object has been modified; please apply your changes to the latest version and try again
	I0911 11:11:08.498868       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"ingress-nginx", Name:"ingress-nginx-controller", UID:"ba640323-a5de-439a-9ac8-50b375abac13", APIVersion:"apps/v1", ResourceVersion:"443", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set ingress-nginx-controller-7fcf777cb7 to 1
	I0911 11:11:08.540136       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b8565416-db0c-4742-8827-4abb89905c4f", APIVersion:"batch/v1", ResourceVersion:"447", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-create-926hq
	I0911 11:11:08.581943       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7", UID:"fdf2abc8-753f-4a73-b485-7841b7083de4", APIVersion:"apps/v1", ResourceVersion:"444", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-controller-7fcf777cb7-djfgh
	I0911 11:11:08.614084       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"313fa2e5-6c99-495b-9c22-c15be1b36694", APIVersion:"batch/v1", ResourceVersion:"453", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: ingress-nginx-admission-patch-4sdb9
	I0911 11:11:12.614247       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-create", UID:"b8565416-db0c-4742-8827-4abb89905c4f", APIVersion:"batch/v1", ResourceVersion:"455", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:11:13.655679       1 event.go:278] Event(v1.ObjectReference{Kind:"Job", Namespace:"ingress-nginx", Name:"ingress-nginx-admission-patch", UID:"313fa2e5-6c99-495b-9c22-c15be1b36694", APIVersion:"batch/v1", ResourceVersion:"465", FieldPath:""}): type: 'Normal' reason: 'Completed' Job completed
	I0911 11:13:56.884714       1 event.go:278] Event(v1.ObjectReference{Kind:"Deployment", Namespace:"default", Name:"hello-world-app", UID:"599582c6-8bfd-4b1b-8c9e-96a7864a204a", APIVersion:"apps/v1", ResourceVersion:"663", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set hello-world-app-5f5d8b66bb to 1
	I0911 11:13:56.922430       1 event.go:278] Event(v1.ObjectReference{Kind:"ReplicaSet", Namespace:"default", Name:"hello-world-app-5f5d8b66bb", UID:"359ff66f-3aa1-4fcc-b503-58e27247e202", APIVersion:"apps/v1", ResourceVersion:"664", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: hello-world-app-5f5d8b66bb-q4czg
	E0911 11:14:05.673628       1 tokens_controller.go:261] error synchronizing serviceaccount ingress-nginx/default: secrets "default-token-sqqpw" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated
	
	* 
	* ==> kube-proxy [cdef59503f7cf4f46395407828964f0acb95cf7412df0fd42d89fe20f4e9957b] <==
	* W0911 11:11:00.964206       1 server_others.go:559] Unknown proxy mode "", assuming iptables proxy
	I0911 11:11:00.974774       1 node.go:136] Successfully retrieved node IP: 192.168.39.127
	I0911 11:11:00.974861       1 server_others.go:186] Using iptables Proxier.
	I0911 11:11:00.975439       1 server.go:583] Version: v1.18.20
	I0911 11:11:00.982808       1 config.go:133] Starting endpoints config controller
	I0911 11:11:00.982883       1 shared_informer.go:223] Waiting for caches to sync for endpoints config
	I0911 11:11:00.982925       1 config.go:315] Starting service config controller
	I0911 11:11:00.982945       1 shared_informer.go:223] Waiting for caches to sync for service config
	I0911 11:11:01.083215       1 shared_informer.go:230] Caches are synced for endpoints config 
	I0911 11:11:01.083432       1 shared_informer.go:230] Caches are synced for service config 
	
	* 
	* ==> kube-scheduler [edb7fd9df36590a35b8e5446cba13efee6e3e2c6d62cc352eaa41b86ea1b0d1a] <==
	* I0911 11:10:39.714988       1 registry.go:150] Registering EvenPodsSpread predicate and priority function
	I0911 11:10:39.721392       1 configmap_cafile_content.go:202] Starting client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:10:39.721714       1 shared_informer.go:223] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:10:39.724056       1 secure_serving.go:178] Serving securely on 127.0.0.1:10259
	I0911 11:10:39.726537       1 tlsconfig.go:240] Starting DynamicServingCertificateController
	E0911 11:10:39.731437       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:10:39.731764       1 reflector.go:178] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:233: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:10:39.735380       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:10:39.735619       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:10:39.735828       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:10:39.735874       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:10:39.736034       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:10:39.736844       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:10:39.737620       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:10:39.737982       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:10:39.739148       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:10:39.739246       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:10:40.680733       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:10:40.741203       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:10:40.862700       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:10:40.888957       1 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:10:41.013591       1 reflector.go:178] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0911 11:10:43.522576       1 shared_informer.go:230] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file 
	E0911 11:10:58.306747       1 factory.go:503] pod: kube-system/coredns-66bff467f8-j7hds is already present in the active queue
	E0911 11:10:58.318992       1 factory.go:503] pod: kube-system/coredns-66bff467f8-5ff5m is already present in the active queue
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:10:07 UTC, ends at Mon 2023-09-11 11:14:09 UTC. --
	Sep 11 11:11:30 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:11:30.369876    1446 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b
	Sep 11 11:11:34 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:11:34.667520    1446 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 11 11:11:34 ingress-addon-legacy-508741 kubelet[1446]: E0911 11:11:34.676434    1446 reflector.go:178] object-"default"/"default-token-9p9dg": Failed to list *v1.Secret: secrets "default-token-9p9dg" is forbidden: User "system:node:ingress-addon-legacy-508741" cannot list resource "secrets" in API group "" in the namespace "default": no relationship found between node "ingress-addon-legacy-508741" and this object
	Sep 11 11:11:34 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:11:34.739129    1446 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9p9dg" (UniqueName: "kubernetes.io/secret/cc3c1e4e-9bec-426e-8070-6c2956a4e987-default-token-9p9dg") pod "nginx" (UID: "cc3c1e4e-9bec-426e-8070-6c2956a4e987")
	Sep 11 11:11:35 ingress-addon-legacy-508741 kubelet[1446]: E0911 11:11:35.840007    1446 secret.go:195] Couldn't get secret default/default-token-9p9dg: failed to sync secret cache: timed out waiting for the condition
	Sep 11 11:11:35 ingress-addon-legacy-508741 kubelet[1446]: E0911 11:11:35.840521    1446 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/cc3c1e4e-9bec-426e-8070-6c2956a4e987-default-token-9p9dg podName:cc3c1e4e-9bec-426e-8070-6c2956a4e987 nodeName:}" failed. No retries permitted until 2023-09-11 11:11:36.340475821 +0000 UTC m=+53.443558011 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"default-token-9p9dg\" (UniqueName: \"kubernetes.io/secret/cc3c1e4e-9bec-426e-8070-6c2956a4e987-default-token-9p9dg\") pod \"nginx\" (UID: \"cc3c1e4e-9bec-426e-8070-6c2956a4e987\") : failed to sync secret cache: timed out waiting for the condition"
	Sep 11 11:13:56 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:13:56.933721    1446 topology_manager.go:235] [topologymanager] Topology Admit Handler
	Sep 11 11:13:56 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:13:56.957738    1446 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "default-token-9p9dg" (UniqueName: "kubernetes.io/secret/262dfcb5-ae2a-4e49-b7c4-d339014404bf-default-token-9p9dg") pod "hello-world-app-5f5d8b66bb-q4czg" (UID: "262dfcb5-ae2a-4e49-b7c4-d339014404bf")
	Sep 11 11:13:58 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:13:58.923953    1446 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 52bcf155f501d7fe6fbd01a8dd5a048493ecbb202b674a55eefeca0a87f85a2e
	Sep 11 11:13:59 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:13:59.064933    1446 reconciler.go:196] operationExecutor.UnmountVolume started for volume "minikube-ingress-dns-token-nhkbj" (UniqueName: "kubernetes.io/secret/b5add997-8589-4c4b-90e3-9d0ba9e85002-minikube-ingress-dns-token-nhkbj") pod "b5add997-8589-4c4b-90e3-9d0ba9e85002" (UID: "b5add997-8589-4c4b-90e3-9d0ba9e85002")
	Sep 11 11:13:59 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:13:59.086672    1446 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b5add997-8589-4c4b-90e3-9d0ba9e85002-minikube-ingress-dns-token-nhkbj" (OuterVolumeSpecName: "minikube-ingress-dns-token-nhkbj") pod "b5add997-8589-4c4b-90e3-9d0ba9e85002" (UID: "b5add997-8589-4c4b-90e3-9d0ba9e85002"). InnerVolumeSpecName "minikube-ingress-dns-token-nhkbj". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:13:59 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:13:59.153078    1446 topology_manager.go:221] [topologymanager] RemoveContainer - Container ID: 52bcf155f501d7fe6fbd01a8dd5a048493ecbb202b674a55eefeca0a87f85a2e
	Sep 11 11:13:59 ingress-addon-legacy-508741 kubelet[1446]: E0911 11:13:59.153711    1446 remote_runtime.go:295] ContainerStatus "52bcf155f501d7fe6fbd01a8dd5a048493ecbb202b674a55eefeca0a87f85a2e" from runtime service failed: rpc error: code = NotFound desc = could not find container "52bcf155f501d7fe6fbd01a8dd5a048493ecbb202b674a55eefeca0a87f85a2e": container with ID starting with 52bcf155f501d7fe6fbd01a8dd5a048493ecbb202b674a55eefeca0a87f85a2e not found: ID does not exist
	Sep 11 11:13:59 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:13:59.165399    1446 reconciler.go:319] Volume detached for volume "minikube-ingress-dns-token-nhkbj" (UniqueName: "kubernetes.io/secret/b5add997-8589-4c4b-90e3-9d0ba9e85002-minikube-ingress-dns-token-nhkbj") on node "ingress-addon-legacy-508741" DevicePath ""
	Sep 11 11:13:59 ingress-addon-legacy-508741 kubelet[1446]: E0911 11:13:59.484873    1446 kubelet_pods.go:1235] Failed killing the pod "kube-ingress-dns-minikube": failed to "KillContainer" for "minikube-ingress-dns" with KillContainerError: "rpc error: code = NotFound desc = could not find container \"52bcf155f501d7fe6fbd01a8dd5a048493ecbb202b674a55eefeca0a87f85a2e\": container with ID starting with 52bcf155f501d7fe6fbd01a8dd5a048493ecbb202b674a55eefeca0a87f85a2e not found: ID does not exist"
	Sep 11 11:14:01 ingress-addon-legacy-508741 kubelet[1446]: E0911 11:14:01.048272    1446 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-djfgh.1783d3df58e1cb7e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-djfgh", UID:"d29d7caf-f1ec-4ce7-b53e-f34d54576705", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-508741"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137db7e429c517e, ext:198146881058, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137db7e429c517e, ext:198146881058, loc:(*time.Location)(0x701e5a0)}}, Count:1, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-djfgh.1783d3df58e1cb7e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:14:01 ingress-addon-legacy-508741 kubelet[1446]: E0911 11:14:01.062132    1446 event.go:260] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ingress-nginx-controller-7fcf777cb7-djfgh.1783d3df58e1cb7e", GenerateName:"", Namespace:"ingress-nginx", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:v1.Time{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, DeletionTimestamp:(*v1.Time)(nil), DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ClusterName:"", ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"ingress-nginx", Name:"ingress-nginx-controller-7fcf777cb7-djfgh", UID:"d29d7caf-f1ec-4ce7-b53e-f34d54576705", APIVersion:"v1", ResourceVersion:"459", FieldPath:"spec.containers{controller}"}, Reason:"Killing", Message:"Stoppi
ng container controller", Source:v1.EventSource{Component:"kubelet", Host:"ingress-addon-legacy-508741"}, FirstTimestamp:v1.Time{Time:time.Time{wall:0xc137db7e429c517e, ext:198146881058, loc:(*time.Location)(0x701e5a0)}}, LastTimestamp:v1.Time{Time:time.Time{wall:0xc137db7e4367c575, ext:198160214555, loc:(*time.Location)(0x701e5a0)}}, Count:2, Type:"Normal", EventTime:v1.MicroTime{Time:time.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}}, Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "ingress-nginx-controller-7fcf777cb7-djfgh.1783d3df58e1cb7e" is forbidden: unable to create new content in namespace ingress-nginx because it is being terminated' (will not retry!)
	Sep 11 11:14:03 ingress-addon-legacy-508741 kubelet[1446]: W0911 11:14:03.945778    1446 pod_container_deletor.go:77] Container "aa31c38c9c5b2a78469f5f30113d494f9ff7e4c25338ce4b4d5d120985f4a439" not found in pod's containers
	Sep 11 11:14:05 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:14:05.190961    1446 reconciler.go:196] operationExecutor.UnmountVolume started for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d29d7caf-f1ec-4ce7-b53e-f34d54576705-webhook-cert") pod "d29d7caf-f1ec-4ce7-b53e-f34d54576705" (UID: "d29d7caf-f1ec-4ce7-b53e-f34d54576705")
	Sep 11 11:14:05 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:14:05.191048    1446 reconciler.go:196] operationExecutor.UnmountVolume started for volume "ingress-nginx-token-k8f5p" (UniqueName: "kubernetes.io/secret/d29d7caf-f1ec-4ce7-b53e-f34d54576705-ingress-nginx-token-k8f5p") pod "d29d7caf-f1ec-4ce7-b53e-f34d54576705" (UID: "d29d7caf-f1ec-4ce7-b53e-f34d54576705")
	Sep 11 11:14:05 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:14:05.194040    1446 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29d7caf-f1ec-4ce7-b53e-f34d54576705-ingress-nginx-token-k8f5p" (OuterVolumeSpecName: "ingress-nginx-token-k8f5p") pod "d29d7caf-f1ec-4ce7-b53e-f34d54576705" (UID: "d29d7caf-f1ec-4ce7-b53e-f34d54576705"). InnerVolumeSpecName "ingress-nginx-token-k8f5p". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:14:05 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:14:05.194707    1446 operation_generator.go:782] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d29d7caf-f1ec-4ce7-b53e-f34d54576705-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "d29d7caf-f1ec-4ce7-b53e-f34d54576705" (UID: "d29d7caf-f1ec-4ce7-b53e-f34d54576705"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 11 11:14:05 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:14:05.291478    1446 reconciler.go:319] Volume detached for volume "ingress-nginx-token-k8f5p" (UniqueName: "kubernetes.io/secret/d29d7caf-f1ec-4ce7-b53e-f34d54576705-ingress-nginx-token-k8f5p") on node "ingress-addon-legacy-508741" DevicePath ""
	Sep 11 11:14:05 ingress-addon-legacy-508741 kubelet[1446]: I0911 11:14:05.291514    1446 reconciler.go:319] Volume detached for volume "webhook-cert" (UniqueName: "kubernetes.io/secret/d29d7caf-f1ec-4ce7-b53e-f34d54576705-webhook-cert") on node "ingress-addon-legacy-508741" DevicePath ""
	Sep 11 11:14:05 ingress-addon-legacy-508741 kubelet[1446]: W0911 11:14:05.487409    1446 kubelet_getters.go:297] Path "/var/lib/kubelet/pods/d29d7caf-f1ec-4ce7-b53e-f34d54576705/volumes" does not exist
	
	* 
	* ==> storage-provisioner [092d16c2a45f2c43ea18093c4ecc9506cddff013f129beebc7bd40a2e9b4f86b] <==
	* I0911 11:11:00.109420       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0911 11:11:30.112034       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	* 
	* ==> storage-provisioner [e7c7916aea0b44cd825176e82f3e9d27a8b73a15257b16a5e1ab6c1f861c4b7d] <==
	* I0911 11:11:30.568769       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 11:11:30.589735       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 11:11:30.589829       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 11:11:30.604395       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 11:11:30.605938       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3d2331ae-856b-4401-a2ad-b3866f9a875e", APIVersion:"v1", ResourceVersion:"530", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ingress-addon-legacy-508741_fbbf9117-50e0-4e30-a9b3-e4cea7bdd3a8 became leader
	I0911 11:11:30.606004       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-508741_fbbf9117-50e0-4e30-a9b3-e4cea7bdd3a8!
	I0911 11:11:30.706920       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ingress-addon-legacy-508741_fbbf9117-50e0-4e30-a9b3-e4cea7bdd3a8!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ingress-addon-legacy-508741 -n ingress-addon-legacy-508741
helpers_test.go:261: (dbg) Run:  kubectl --context ingress-addon-legacy-508741 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestIngressAddonLegacy/serial/ValidateIngressAddons FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestIngressAddonLegacy/serial/ValidateIngressAddons (166.73s)

                                                
                                    
x
+
TestMultiNode/serial/PingHostFrom2Pods (3.34s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:552: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-4jnst -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-4jnst -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-4jnst -- sh -c "ping -c 1 192.168.39.1": exit status 1 (238.997946ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-4jnst): exit status 1
multinode_test.go:560: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-f9d7x -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:571: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-f9d7x -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:571: (dbg) Non-zero exit: out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-f9d7x -- sh -c "ping -c 1 192.168.39.1": exit status 1 (179.875953ms)

                                                
                                                
-- stdout --
	PING 192.168.39.1 (192.168.39.1): 56 data bytes

                                                
                                                
-- /stdout --
** stderr ** 
	ping: permission denied (are you root?)
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:572: Failed to ping host (192.168.39.1) from pod (busybox-5bc68d56bd-f9d7x): exit status 1
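The "ping: permission denied (are you root?)" error from busybox typically means the pod could not open a raw ICMP socket: crio's default container capability set does not grant CAP_NET_RAW, so an unprivileged exec into the busybox pod cannot ping the host gateway. A minimal sketch of one way to grant that capability to this test's busybox deployment follows; the use of kubectl patch and the container index 0 are assumptions for illustration, not part of the test suite.

	# Sketch only: add CAP_NET_RAW to the busybox pod template so "ping -c 1 192.168.39.1" can open a raw socket.
	out/minikube-linux-amd64 kubectl -p multinode-378707 -- patch deployment busybox --type=json \
	  -p='[{"op":"add","path":"/spec/template/spec/containers/0/securityContext","value":{"capabilities":{"add":["NET_RAW"]}}}]'

Another route some environments take is allowing unprivileged ICMP on the node via the net.ipv4.ping_group_range sysctl, but whether that helps depends on the busybox build falling back to ICMP datagram sockets.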
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-378707 -n multinode-378707
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-378707 logs -n 25: (1.37821264s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-068762 ssh -- ls                    | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-068762 ssh --                       | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| stop    | -p mount-start-2-068762                           | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	| start   | -p mount-start-2-068762                           | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	| mount   | /home/jenkins:/minikube-host                      | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC |                     |
	|         | --profile mount-start-2-068762                    |                      |         |         |                     |                     |
	|         | --v 0 --9p-version 9p2000.L                       |                      |         |         |                     |                     |
	|         | --gid 0 --ip  --msize 6543                        |                      |         |         |                     |                     |
	|         | --port 46465 --type 9p --uid 0                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-068762 ssh -- ls                    | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|         | /minikube-host                                    |                      |         |         |                     |                     |
	| ssh     | mount-start-2-068762 ssh --                       | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	|         | mount | grep 9p                                   |                      |         |         |                     |                     |
	| delete  | -p mount-start-2-068762                           | mount-start-2-068762 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	| delete  | -p mount-start-1-051535                           | mount-start-1-051535 | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:18 UTC |
	| start   | -p multinode-378707                               | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:18 UTC | 11 Sep 23 11:20 UTC |
	|         | --wait=true --memory=2200                         |                      |         |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |         |         |                     |                     |
	|         | --alsologtostderr                                 |                      |         |         |                     |                     |
	|         | --driver=kvm2                                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio                          |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- apply -f                   | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- rollout                    | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | status deployment/busybox                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- get pods -o                | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- get pods -o                | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-4jnst --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-f9d7x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-4jnst --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-f9d7x --                       |                      |         |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-4jnst -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-f9d7x -- nslookup              |                      |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- get pods -o                | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-4jnst                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC |                     |
	|         | busybox-5bc68d56bd-4jnst -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC | 11 Sep 23 11:20 UTC |
	|         | busybox-5bc68d56bd-f9d7x                          |                      |         |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |         |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |         |         |                     |                     |
	| kubectl | -p multinode-378707 -- exec                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:20 UTC |                     |
	|         | busybox-5bc68d56bd-f9d7x -- sh                    |                      |         |         |                     |                     |
	|         | -c ping -c 1 192.168.39.1                         |                      |         |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:18:35
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:18:35.757476 2234986 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:18:35.757622 2234986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:18:35.757630 2234986 out.go:309] Setting ErrFile to fd 2...
	I0911 11:18:35.757637 2234986 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:18:35.757848 2234986 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:18:35.758498 2234986 out.go:303] Setting JSON to false
	I0911 11:18:35.759472 2234986 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":234067,"bootTime":1694197049,"procs":208,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:18:35.759540 2234986 start.go:138] virtualization: kvm guest
	I0911 11:18:35.762054 2234986 out.go:177] * [multinode-378707] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:18:35.764235 2234986 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:18:35.764335 2234986 notify.go:220] Checking for updates...
	I0911 11:18:35.765866 2234986 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:18:35.767451 2234986 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:18:35.769081 2234986 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:18:35.770563 2234986 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:18:35.772063 2234986 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:18:35.773981 2234986 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:18:35.812287 2234986 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 11:18:35.813979 2234986 start.go:298] selected driver: kvm2
	I0911 11:18:35.813994 2234986 start.go:902] validating driver "kvm2" against <nil>
	I0911 11:18:35.814011 2234986 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:18:35.814732 2234986 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:18:35.814855 2234986 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:18:35.830541 2234986 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:18:35.830604 2234986 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:18:35.830830 2234986 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 11:18:35.830869 2234986 cni.go:84] Creating CNI manager for ""
	I0911 11:18:35.830883 2234986 cni.go:136] 0 nodes found, recommending kindnet
	I0911 11:18:35.830905 2234986 start_flags.go:316] Found "CNI" CNI - setting NetworkPlugin=cni
	I0911 11:18:35.830923 2234986 start_flags.go:321] config:
	{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio
CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:18:35.831106 2234986 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:18:35.833345 2234986 out.go:177] * Starting control plane node multinode-378707 in cluster multinode-378707
	I0911 11:18:35.834968 2234986 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:18:35.835010 2234986 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:18:35.835025 2234986 cache.go:57] Caching tarball of preloaded images
	I0911 11:18:35.835108 2234986 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:18:35.835121 2234986 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:18:35.835448 2234986 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:18:35.835477 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json: {Name:mk4d54a22d2f3a7a9168e8c3ae609bbee7585dd3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:18:35.835626 2234986 start.go:365] acquiring machines lock for multinode-378707: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:18:35.835673 2234986 start.go:369] acquired machines lock for "multinode-378707" in 31.321µs
	I0911 11:18:35.835698 2234986 start.go:93] Provisioning new machine with config: &{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 M
ountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:18:35.835773 2234986 start.go:125] createHost starting for "" (driver="kvm2")
	I0911 11:18:35.837688 2234986 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 11:18:35.837816 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:18:35.837858 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:18:35.853255 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
	I0911 11:18:35.853734 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:18:35.854367 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:18:35.854390 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:18:35.854779 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:18:35.854985 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:18:35.855133 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:18:35.855297 2234986 start.go:159] libmachine.API.Create for "multinode-378707" (driver="kvm2")
	I0911 11:18:35.855324 2234986 client.go:168] LocalClient.Create starting
	I0911 11:18:35.855351 2234986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem
	I0911 11:18:35.855390 2234986 main.go:141] libmachine: Decoding PEM data...
	I0911 11:18:35.855407 2234986 main.go:141] libmachine: Parsing certificate...
	I0911 11:18:35.855464 2234986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem
	I0911 11:18:35.855482 2234986 main.go:141] libmachine: Decoding PEM data...
	I0911 11:18:35.855497 2234986 main.go:141] libmachine: Parsing certificate...
	I0911 11:18:35.855514 2234986 main.go:141] libmachine: Running pre-create checks...
	I0911 11:18:35.855527 2234986 main.go:141] libmachine: (multinode-378707) Calling .PreCreateCheck
	I0911 11:18:35.855866 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetConfigRaw
	I0911 11:18:35.856268 2234986 main.go:141] libmachine: Creating machine...
	I0911 11:18:35.856283 2234986 main.go:141] libmachine: (multinode-378707) Calling .Create
	I0911 11:18:35.856408 2234986 main.go:141] libmachine: (multinode-378707) Creating KVM machine...
	I0911 11:18:35.857934 2234986 main.go:141] libmachine: (multinode-378707) DBG | found existing default KVM network
	I0911 11:18:35.858650 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:35.858519 2235008 network.go:209] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012d790}
	I0911 11:18:35.864702 2234986 main.go:141] libmachine: (multinode-378707) DBG | trying to create private KVM network mk-multinode-378707 192.168.39.0/24...
	I0911 11:18:35.945893 2234986 main.go:141] libmachine: (multinode-378707) DBG | private KVM network mk-multinode-378707 192.168.39.0/24 created
	I0911 11:18:35.945950 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:35.945837 2235008 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:18:35.945966 2234986 main.go:141] libmachine: (multinode-378707) Setting up store path in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707 ...
	I0911 11:18:35.945990 2234986 main.go:141] libmachine: (multinode-378707) Building disk image from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 11:18:35.946011 2234986 main.go:141] libmachine: (multinode-378707) Downloading /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0911 11:18:36.182733 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:36.182515 2235008 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa...
	I0911 11:18:36.407725 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:36.407543 2235008 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/multinode-378707.rawdisk...
	I0911 11:18:36.407761 2234986 main.go:141] libmachine: (multinode-378707) DBG | Writing magic tar header
	I0911 11:18:36.407782 2234986 main.go:141] libmachine: (multinode-378707) DBG | Writing SSH key tar header
	I0911 11:18:36.407791 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:36.407665 2235008 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707 ...
	I0911 11:18:36.407811 2234986 main.go:141] libmachine: (multinode-378707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707
	I0911 11:18:36.407827 2234986 main.go:141] libmachine: (multinode-378707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines
	I0911 11:18:36.407841 2234986 main.go:141] libmachine: (multinode-378707) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707 (perms=drwx------)
	I0911 11:18:36.407860 2234986 main.go:141] libmachine: (multinode-378707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:18:36.407868 2234986 main.go:141] libmachine: (multinode-378707) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines (perms=drwxr-xr-x)
	I0911 11:18:36.407881 2234986 main.go:141] libmachine: (multinode-378707) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube (perms=drwxr-xr-x)
	I0911 11:18:36.407891 2234986 main.go:141] libmachine: (multinode-378707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273
	I0911 11:18:36.407901 2234986 main.go:141] libmachine: (multinode-378707) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273 (perms=drwxrwxr-x)
	I0911 11:18:36.407909 2234986 main.go:141] libmachine: (multinode-378707) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0911 11:18:36.407915 2234986 main.go:141] libmachine: (multinode-378707) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0911 11:18:36.407927 2234986 main.go:141] libmachine: (multinode-378707) Creating domain...
	I0911 11:18:36.407937 2234986 main.go:141] libmachine: (multinode-378707) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0911 11:18:36.407948 2234986 main.go:141] libmachine: (multinode-378707) DBG | Checking permissions on dir: /home/jenkins
	I0911 11:18:36.407959 2234986 main.go:141] libmachine: (multinode-378707) DBG | Checking permissions on dir: /home
	I0911 11:18:36.407971 2234986 main.go:141] libmachine: (multinode-378707) DBG | Skipping /home - not owner
	I0911 11:18:36.409123 2234986 main.go:141] libmachine: (multinode-378707) define libvirt domain using xml: 
	I0911 11:18:36.409152 2234986 main.go:141] libmachine: (multinode-378707) <domain type='kvm'>
	I0911 11:18:36.409165 2234986 main.go:141] libmachine: (multinode-378707)   <name>multinode-378707</name>
	I0911 11:18:36.409186 2234986 main.go:141] libmachine: (multinode-378707)   <memory unit='MiB'>2200</memory>
	I0911 11:18:36.409201 2234986 main.go:141] libmachine: (multinode-378707)   <vcpu>2</vcpu>
	I0911 11:18:36.409220 2234986 main.go:141] libmachine: (multinode-378707)   <features>
	I0911 11:18:36.409246 2234986 main.go:141] libmachine: (multinode-378707)     <acpi/>
	I0911 11:18:36.409267 2234986 main.go:141] libmachine: (multinode-378707)     <apic/>
	I0911 11:18:36.409274 2234986 main.go:141] libmachine: (multinode-378707)     <pae/>
	I0911 11:18:36.409292 2234986 main.go:141] libmachine: (multinode-378707)     
	I0911 11:18:36.409300 2234986 main.go:141] libmachine: (multinode-378707)   </features>
	I0911 11:18:36.409306 2234986 main.go:141] libmachine: (multinode-378707)   <cpu mode='host-passthrough'>
	I0911 11:18:36.409314 2234986 main.go:141] libmachine: (multinode-378707)   
	I0911 11:18:36.409322 2234986 main.go:141] libmachine: (multinode-378707)   </cpu>
	I0911 11:18:36.409327 2234986 main.go:141] libmachine: (multinode-378707)   <os>
	I0911 11:18:36.409340 2234986 main.go:141] libmachine: (multinode-378707)     <type>hvm</type>
	I0911 11:18:36.409369 2234986 main.go:141] libmachine: (multinode-378707)     <boot dev='cdrom'/>
	I0911 11:18:36.409413 2234986 main.go:141] libmachine: (multinode-378707)     <boot dev='hd'/>
	I0911 11:18:36.409426 2234986 main.go:141] libmachine: (multinode-378707)     <bootmenu enable='no'/>
	I0911 11:18:36.409443 2234986 main.go:141] libmachine: (multinode-378707)   </os>
	I0911 11:18:36.409457 2234986 main.go:141] libmachine: (multinode-378707)   <devices>
	I0911 11:18:36.409469 2234986 main.go:141] libmachine: (multinode-378707)     <disk type='file' device='cdrom'>
	I0911 11:18:36.409487 2234986 main.go:141] libmachine: (multinode-378707)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/boot2docker.iso'/>
	I0911 11:18:36.409501 2234986 main.go:141] libmachine: (multinode-378707)       <target dev='hdc' bus='scsi'/>
	I0911 11:18:36.409511 2234986 main.go:141] libmachine: (multinode-378707)       <readonly/>
	I0911 11:18:36.409523 2234986 main.go:141] libmachine: (multinode-378707)     </disk>
	I0911 11:18:36.409538 2234986 main.go:141] libmachine: (multinode-378707)     <disk type='file' device='disk'>
	I0911 11:18:36.409558 2234986 main.go:141] libmachine: (multinode-378707)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0911 11:18:36.409577 2234986 main.go:141] libmachine: (multinode-378707)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/multinode-378707.rawdisk'/>
	I0911 11:18:36.409593 2234986 main.go:141] libmachine: (multinode-378707)       <target dev='hda' bus='virtio'/>
	I0911 11:18:36.409605 2234986 main.go:141] libmachine: (multinode-378707)     </disk>
	I0911 11:18:36.409618 2234986 main.go:141] libmachine: (multinode-378707)     <interface type='network'>
	I0911 11:18:36.409632 2234986 main.go:141] libmachine: (multinode-378707)       <source network='mk-multinode-378707'/>
	I0911 11:18:36.409644 2234986 main.go:141] libmachine: (multinode-378707)       <model type='virtio'/>
	I0911 11:18:36.409661 2234986 main.go:141] libmachine: (multinode-378707)     </interface>
	I0911 11:18:36.409678 2234986 main.go:141] libmachine: (multinode-378707)     <interface type='network'>
	I0911 11:18:36.409690 2234986 main.go:141] libmachine: (multinode-378707)       <source network='default'/>
	I0911 11:18:36.409703 2234986 main.go:141] libmachine: (multinode-378707)       <model type='virtio'/>
	I0911 11:18:36.409716 2234986 main.go:141] libmachine: (multinode-378707)     </interface>
	I0911 11:18:36.409729 2234986 main.go:141] libmachine: (multinode-378707)     <serial type='pty'>
	I0911 11:18:36.409743 2234986 main.go:141] libmachine: (multinode-378707)       <target port='0'/>
	I0911 11:18:36.409758 2234986 main.go:141] libmachine: (multinode-378707)     </serial>
	I0911 11:18:36.409770 2234986 main.go:141] libmachine: (multinode-378707)     <console type='pty'>
	I0911 11:18:36.409782 2234986 main.go:141] libmachine: (multinode-378707)       <target type='serial' port='0'/>
	I0911 11:18:36.409793 2234986 main.go:141] libmachine: (multinode-378707)     </console>
	I0911 11:18:36.409808 2234986 main.go:141] libmachine: (multinode-378707)     <rng model='virtio'>
	I0911 11:18:36.409822 2234986 main.go:141] libmachine: (multinode-378707)       <backend model='random'>/dev/random</backend>
	I0911 11:18:36.409838 2234986 main.go:141] libmachine: (multinode-378707)     </rng>
	I0911 11:18:36.409851 2234986 main.go:141] libmachine: (multinode-378707)     
	I0911 11:18:36.409865 2234986 main.go:141] libmachine: (multinode-378707)     
	I0911 11:18:36.409878 2234986 main.go:141] libmachine: (multinode-378707)   </devices>
	I0911 11:18:36.409889 2234986 main.go:141] libmachine: (multinode-378707) </domain>
	I0911 11:18:36.409905 2234986 main.go:141] libmachine: (multinode-378707) 
	I0911 11:18:36.414567 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:8f:43:c3 in network default
	I0911 11:18:36.415273 2234986 main.go:141] libmachine: (multinode-378707) Ensuring networks are active...
	I0911 11:18:36.415294 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:36.416099 2234986 main.go:141] libmachine: (multinode-378707) Ensuring network default is active
	I0911 11:18:36.416489 2234986 main.go:141] libmachine: (multinode-378707) Ensuring network mk-multinode-378707 is active
	I0911 11:18:36.417046 2234986 main.go:141] libmachine: (multinode-378707) Getting domain xml...
	I0911 11:18:36.417928 2234986 main.go:141] libmachine: (multinode-378707) Creating domain...
	I0911 11:18:37.674412 2234986 main.go:141] libmachine: (multinode-378707) Waiting to get IP...
	I0911 11:18:37.675231 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:37.675613 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:37.675693 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:37.675604 2235008 retry.go:31] will retry after 262.999946ms: waiting for machine to come up
	I0911 11:18:37.940408 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:37.940944 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:37.940973 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:37.940869 2235008 retry.go:31] will retry after 234.863101ms: waiting for machine to come up
	I0911 11:18:38.177647 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:38.178118 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:38.178150 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:38.178047 2235008 retry.go:31] will retry after 429.407268ms: waiting for machine to come up
	I0911 11:18:38.608701 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:38.609268 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:38.609307 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:38.609210 2235008 retry.go:31] will retry after 507.375885ms: waiting for machine to come up
	I0911 11:18:39.118167 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:39.118690 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:39.118718 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:39.118650 2235008 retry.go:31] will retry after 570.118005ms: waiting for machine to come up
	I0911 11:18:39.690187 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:39.690701 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:39.690726 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:39.690639 2235008 retry.go:31] will retry after 913.208953ms: waiting for machine to come up
	I0911 11:18:40.605894 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:40.606273 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:40.606307 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:40.606211 2235008 retry.go:31] will retry after 844.715236ms: waiting for machine to come up
	I0911 11:18:41.453200 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:41.453634 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:41.453667 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:41.453588 2235008 retry.go:31] will retry after 1.295748216s: waiting for machine to come up
	I0911 11:18:42.751210 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:42.751622 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:42.751656 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:42.751560 2235008 retry.go:31] will retry after 1.389131583s: waiting for machine to come up
	I0911 11:18:44.143207 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:44.143669 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:44.143696 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:44.143614 2235008 retry.go:31] will retry after 1.405896585s: waiting for machine to come up
	I0911 11:18:45.551483 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:45.551926 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:45.551951 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:45.551872 2235008 retry.go:31] will retry after 2.904096164s: waiting for machine to come up
	I0911 11:18:48.459231 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:48.459670 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:48.459730 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:48.459602 2235008 retry.go:31] will retry after 2.621047142s: waiting for machine to come up
	I0911 11:18:51.082136 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:51.082668 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:51.082701 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:51.082615 2235008 retry.go:31] will retry after 3.785819449s: waiting for machine to come up
	I0911 11:18:54.869875 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:54.870287 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:18:54.870314 2234986 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:18:54.870207 2235008 retry.go:31] will retry after 5.10742645s: waiting for machine to come up
	I0911 11:18:59.982771 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:59.983338 2234986 main.go:141] libmachine: (multinode-378707) Found IP for machine: 192.168.39.237
	I0911 11:18:59.983362 2234986 main.go:141] libmachine: (multinode-378707) Reserving static IP address...
	I0911 11:18:59.983372 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has current primary IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:18:59.983989 2234986 main.go:141] libmachine: (multinode-378707) DBG | unable to find host DHCP lease matching {name: "multinode-378707", mac: "52:54:00:57:31:1a", ip: "192.168.39.237"} in network mk-multinode-378707
	I0911 11:19:00.089632 2234986 main.go:141] libmachine: (multinode-378707) DBG | Getting to WaitForSSH function...
	I0911 11:19:00.089670 2234986 main.go:141] libmachine: (multinode-378707) Reserved static IP address: 192.168.39.237
	I0911 11:19:00.089689 2234986 main.go:141] libmachine: (multinode-378707) Waiting for SSH to be available...
	I0911 11:19:00.092738 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.093250 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:minikube Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.093290 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.093409 2234986 main.go:141] libmachine: (multinode-378707) DBG | Using SSH client type: external
	I0911 11:19:00.093440 2234986 main.go:141] libmachine: (multinode-378707) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa (-rw-------)
	I0911 11:19:00.093496 2234986 main.go:141] libmachine: (multinode-378707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 11:19:00.093519 2234986 main.go:141] libmachine: (multinode-378707) DBG | About to run SSH command:
	I0911 11:19:00.093537 2234986 main.go:141] libmachine: (multinode-378707) DBG | exit 0
	I0911 11:19:00.193089 2234986 main.go:141] libmachine: (multinode-378707) DBG | SSH cmd err, output: <nil>: 
	I0911 11:19:00.193377 2234986 main.go:141] libmachine: (multinode-378707) KVM machine creation complete!
	I0911 11:19:00.193830 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetConfigRaw
	I0911 11:19:00.194507 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:00.194744 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:00.194932 2234986 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 11:19:00.194948 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetState
	I0911 11:19:00.196772 2234986 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 11:19:00.196795 2234986 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 11:19:00.196804 2234986 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 11:19:00.196832 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:00.199888 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.200356 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.200389 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.200682 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:00.200971 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.201194 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.201366 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:00.201610 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:19:00.202086 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:19:00.202101 2234986 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 11:19:00.336801 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:19:00.336855 2234986 main.go:141] libmachine: Detecting the provisioner...
	I0911 11:19:00.336870 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:00.340381 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.340786 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.340855 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.341076 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:00.341333 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.341531 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.341714 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:00.341920 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:19:00.342424 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:19:00.342442 2234986 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 11:19:00.478205 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 11:19:00.478296 2234986 main.go:141] libmachine: found compatible host: buildroot
	I0911 11:19:00.478313 2234986 main.go:141] libmachine: Provisioning with buildroot...
	I0911 11:19:00.478329 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:19:00.478672 2234986 buildroot.go:166] provisioning hostname "multinode-378707"
	I0911 11:19:00.478707 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:19:00.478938 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:00.481798 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.482308 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.482350 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.482523 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:00.482754 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.482942 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.483134 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:00.483354 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:19:00.483750 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:19:00.483764 2234986 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-378707 && echo "multinode-378707" | sudo tee /etc/hostname
	I0911 11:19:00.634300 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-378707
	
	I0911 11:19:00.634335 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:00.637681 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.638074 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.638104 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.638332 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:00.638570 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.638807 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.638949 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:00.639141 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:19:00.639565 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:19:00.639583 2234986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-378707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-378707/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-378707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:19:00.782268 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:19:00.782322 2234986 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:19:00.782373 2234986 buildroot.go:174] setting up certificates
	I0911 11:19:00.782385 2234986 provision.go:83] configureAuth start
	I0911 11:19:00.782401 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:19:00.782744 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:19:00.786238 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.786637 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.786665 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.786905 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:00.789864 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.790470 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.790519 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.790673 2234986 provision.go:138] copyHostCerts
	I0911 11:19:00.790727 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:19:00.790776 2234986 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:19:00.790786 2234986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:19:00.790850 2234986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:19:00.790972 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:19:00.790996 2234986 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:19:00.791000 2234986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:19:00.791020 2234986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:19:00.791072 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:19:00.791094 2234986 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:19:00.791100 2234986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:19:00.791117 2234986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:19:00.791183 2234986 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.multinode-378707 san=[192.168.39.237 192.168.39.237 localhost 127.0.0.1 minikube multinode-378707]
	I0911 11:19:00.917625 2234986 provision.go:172] copyRemoteCerts
	I0911 11:19:00.917687 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:19:00.917719 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:00.920779 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.921122 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:00.921169 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:00.921360 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:00.921591 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:00.921772 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:00.921897 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:19:01.019693 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:19:01.019768 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:19:01.045455 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:19:01.045559 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0911 11:19:01.071233 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:19:01.071332 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:19:01.096643 2234986 provision.go:86] duration metric: configureAuth took 314.238923ms
	I0911 11:19:01.096678 2234986 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:19:01.096944 2234986 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:19:01.097037 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:01.100160 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.100497 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.100536 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.100726 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:01.100992 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:01.101302 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:01.101463 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:01.101696 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:19:01.102201 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:19:01.102230 2234986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:19:01.449295 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:19:01.449326 2234986 main.go:141] libmachine: Checking connection to Docker...
	I0911 11:19:01.449338 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetURL
	I0911 11:19:01.450785 2234986 main.go:141] libmachine: (multinode-378707) DBG | Using libvirt version 6000000
	I0911 11:19:01.453195 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.453531 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.453573 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.453711 2234986 main.go:141] libmachine: Docker is up and running!
	I0911 11:19:01.453724 2234986 main.go:141] libmachine: Reticulating splines...
	I0911 11:19:01.453731 2234986 client.go:171] LocalClient.Create took 25.598400419s
	I0911 11:19:01.453757 2234986 start.go:167] duration metric: libmachine.API.Create for "multinode-378707" took 25.598462716s
	I0911 11:19:01.453768 2234986 start.go:300] post-start starting for "multinode-378707" (driver="kvm2")
	I0911 11:19:01.453785 2234986 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:19:01.453805 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:01.454084 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:19:01.454123 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:01.456439 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.457030 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.457062 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.457246 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:01.457459 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:01.457636 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:01.457821 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:19:01.555676 2234986 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:19:01.560287 2234986 command_runner.go:130] > NAME=Buildroot
	I0911 11:19:01.560325 2234986 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0911 11:19:01.560345 2234986 command_runner.go:130] > ID=buildroot
	I0911 11:19:01.560353 2234986 command_runner.go:130] > VERSION_ID=2021.02.12
	I0911 11:19:01.560364 2234986 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0911 11:19:01.560472 2234986 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:19:01.560501 2234986 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:19:01.560578 2234986 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:19:01.560684 2234986 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:19:01.560702 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /etc/ssl/certs/22224712.pem
	I0911 11:19:01.560837 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:19:01.570146 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:19:01.596993 2234986 start.go:303] post-start completed in 143.209835ms
	I0911 11:19:01.597048 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetConfigRaw
	I0911 11:19:01.597794 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:19:01.600701 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.601095 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.601123 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.601433 2234986 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:19:01.601638 2234986 start.go:128] duration metric: createHost completed in 25.765847306s
	I0911 11:19:01.601665 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:01.604531 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.605163 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.605198 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.605463 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:01.605687 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:01.605888 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:01.606044 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:01.606266 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:19:01.606855 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:19:01.606870 2234986 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 11:19:01.742211 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694431141.724555914
	
	I0911 11:19:01.742259 2234986 fix.go:206] guest clock: 1694431141.724555914
	I0911 11:19:01.742269 2234986 fix.go:219] Guest: 2023-09-11 11:19:01.724555914 +0000 UTC Remote: 2023-09-11 11:19:01.601652239 +0000 UTC m=+25.879004405 (delta=122.903675ms)
	I0911 11:19:01.742313 2234986 fix.go:190] guest clock delta is within tolerance: 122.903675ms
	I0911 11:19:01.742322 2234986 start.go:83] releasing machines lock for "multinode-378707", held for 25.906636232s
	I0911 11:19:01.742356 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:01.742679 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:19:01.745733 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.746183 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.746219 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.746453 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:01.747131 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:01.747392 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:01.747552 2234986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:19:01.747617 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:01.747685 2234986 ssh_runner.go:195] Run: cat /version.json
	I0911 11:19:01.747717 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:01.751008 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.751042 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.751343 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.751382 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.751404 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:01.751414 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:01.751645 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:01.751787 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:01.751863 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:01.751930 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:01.752031 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:01.752125 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:01.752198 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:19:01.752240 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:19:01.875164 2234986 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 11:19:01.876259 2234986 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0911 11:19:01.876415 2234986 ssh_runner.go:195] Run: systemctl --version
	I0911 11:19:01.882454 2234986 command_runner.go:130] > systemd 247 (247)
	I0911 11:19:01.882489 2234986 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0911 11:19:01.882688 2234986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:19:02.057054 2234986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:19:02.064183 2234986 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0911 11:19:02.064232 2234986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:19:02.064292 2234986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:19:02.080570 2234986 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0911 11:19:02.080625 2234986 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 11:19:02.080638 2234986 start.go:466] detecting cgroup driver to use...
	I0911 11:19:02.080729 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:19:02.096091 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:19:02.109786 2234986 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:19:02.109850 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:19:02.123370 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:19:02.137388 2234986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:19:02.250004 2234986 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0911 11:19:02.250108 2234986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:19:02.379547 2234986 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0911 11:19:02.379596 2234986 docker.go:212] disabling docker service ...
	I0911 11:19:02.379660 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:19:02.396314 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:19:02.408498 2234986 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0911 11:19:02.409068 2234986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:19:02.423752 2234986 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0911 11:19:02.529803 2234986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:19:02.653426 2234986 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0911 11:19:02.653462 2234986 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0911 11:19:02.653541 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:19:02.666624 2234986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:19:02.686463 2234986 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0911 11:19:02.686503 2234986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:19:02.686563 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:19:02.696977 2234986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:19:02.697050 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:19:02.707969 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:19:02.718862 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:19:02.729732 2234986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:19:02.740872 2234986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:19:02.749961 2234986 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:19:02.750015 2234986 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:19:02.750066 2234986 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 11:19:02.763610 2234986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:19:02.772835 2234986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:19:02.891969 2234986 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:19:03.092387 2234986 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:19:03.092482 2234986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:19:03.098359 2234986 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0911 11:19:03.098392 2234986 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0911 11:19:03.098399 2234986 command_runner.go:130] > Device: 16h/22d	Inode: 721         Links: 1
	I0911 11:19:03.098406 2234986 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:19:03.098411 2234986 command_runner.go:130] > Access: 2023-09-11 11:19:03.054804457 +0000
	I0911 11:19:03.098418 2234986 command_runner.go:130] > Modify: 2023-09-11 11:19:03.054804457 +0000
	I0911 11:19:03.098423 2234986 command_runner.go:130] > Change: 2023-09-11 11:19:03.054804457 +0000
	I0911 11:19:03.098427 2234986 command_runner.go:130] >  Birth: -
	I0911 11:19:03.098446 2234986 start.go:534] Will wait 60s for crictl version
	I0911 11:19:03.098492 2234986 ssh_runner.go:195] Run: which crictl
	I0911 11:19:03.102805 2234986 command_runner.go:130] > /usr/bin/crictl
	I0911 11:19:03.102940 2234986 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:19:03.135070 2234986 command_runner.go:130] > Version:  0.1.0
	I0911 11:19:03.135096 2234986 command_runner.go:130] > RuntimeName:  cri-o
	I0911 11:19:03.135101 2234986 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0911 11:19:03.135107 2234986 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0911 11:19:03.136391 2234986 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 11:19:03.136469 2234986 ssh_runner.go:195] Run: crio --version
	I0911 11:19:03.185335 2234986 command_runner.go:130] > crio version 1.24.1
	I0911 11:19:03.185360 2234986 command_runner.go:130] > Version:          1.24.1
	I0911 11:19:03.185367 2234986 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:19:03.185372 2234986 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:19:03.185381 2234986 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:19:03.185385 2234986 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:19:03.185389 2234986 command_runner.go:130] > Compiler:         gc
	I0911 11:19:03.185393 2234986 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:19:03.185400 2234986 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:19:03.185407 2234986 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:19:03.185414 2234986 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:19:03.185418 2234986 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:19:03.186977 2234986 ssh_runner.go:195] Run: crio --version
	I0911 11:19:03.237271 2234986 command_runner.go:130] > crio version 1.24.1
	I0911 11:19:03.237294 2234986 command_runner.go:130] > Version:          1.24.1
	I0911 11:19:03.237302 2234986 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:19:03.237307 2234986 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:19:03.237321 2234986 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:19:03.237325 2234986 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:19:03.237329 2234986 command_runner.go:130] > Compiler:         gc
	I0911 11:19:03.237334 2234986 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:19:03.237342 2234986 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:19:03.237350 2234986 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:19:03.237355 2234986 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:19:03.237359 2234986 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:19:03.239656 2234986 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 11:19:03.241281 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:19:03.244504 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:03.244949 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:03.244998 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:03.245229 2234986 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 11:19:03.249606 2234986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:19:03.263785 2234986 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:19:03.263846 2234986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:19:03.291792 2234986 command_runner.go:130] > {
	I0911 11:19:03.291821 2234986 command_runner.go:130] >   "images": [
	I0911 11:19:03.291827 2234986 command_runner.go:130] >   ]
	I0911 11:19:03.291832 2234986 command_runner.go:130] > }
	I0911 11:19:03.292912 2234986 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 11:19:03.292984 2234986 ssh_runner.go:195] Run: which lz4
	I0911 11:19:03.297265 2234986 command_runner.go:130] > /usr/bin/lz4
	I0911 11:19:03.297302 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0911 11:19:03.297387 2234986 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 11:19:03.301811 2234986 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 11:19:03.301862 2234986 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 11:19:03.301890 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 11:19:05.186204 2234986 crio.go:444] Took 1.888837 seconds to copy over tarball
	I0911 11:19:05.186319 2234986 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 11:19:08.122543 2234986 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.936193353s)
	I0911 11:19:08.122573 2234986 crio.go:451] Took 2.936335 seconds to extract the tarball
	I0911 11:19:08.122584 2234986 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 11:19:08.167258 2234986 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:19:08.234976 2234986 command_runner.go:130] > {
	I0911 11:19:08.235007 2234986 command_runner.go:130] >   "images": [
	I0911 11:19:08.235014 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235026 2234986 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0911 11:19:08.235033 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235042 2234986 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0911 11:19:08.235048 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235063 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235080 2234986 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0911 11:19:08.235097 2234986 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0911 11:19:08.235102 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235113 2234986 command_runner.go:130] >       "size": "65249302",
	I0911 11:19:08.235120 2234986 command_runner.go:130] >       "uid": null,
	I0911 11:19:08.235129 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235137 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235142 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235146 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235152 2234986 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0911 11:19:08.235156 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235161 2234986 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0911 11:19:08.235165 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235168 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235176 2234986 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0911 11:19:08.235185 2234986 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0911 11:19:08.235189 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235196 2234986 command_runner.go:130] >       "size": "31470524",
	I0911 11:19:08.235203 2234986 command_runner.go:130] >       "uid": null,
	I0911 11:19:08.235215 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235221 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235225 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235236 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235244 2234986 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0911 11:19:08.235250 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235256 2234986 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0911 11:19:08.235262 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235266 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235276 2234986 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0911 11:19:08.235286 2234986 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0911 11:19:08.235289 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235296 2234986 command_runner.go:130] >       "size": "53621675",
	I0911 11:19:08.235300 2234986 command_runner.go:130] >       "uid": null,
	I0911 11:19:08.235307 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235311 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235320 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235326 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235332 2234986 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0911 11:19:08.235339 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235344 2234986 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0911 11:19:08.235350 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235354 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235363 2234986 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0911 11:19:08.235372 2234986 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0911 11:19:08.235378 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235382 2234986 command_runner.go:130] >       "size": "295456551",
	I0911 11:19:08.235386 2234986 command_runner.go:130] >       "uid": {
	I0911 11:19:08.235392 2234986 command_runner.go:130] >         "value": "0"
	I0911 11:19:08.235402 2234986 command_runner.go:130] >       },
	I0911 11:19:08.235408 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235412 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235418 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235422 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235432 2234986 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0911 11:19:08.235439 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235444 2234986 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0911 11:19:08.235451 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235455 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235465 2234986 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0911 11:19:08.235474 2234986 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0911 11:19:08.235480 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235484 2234986 command_runner.go:130] >       "size": "126972880",
	I0911 11:19:08.235491 2234986 command_runner.go:130] >       "uid": {
	I0911 11:19:08.235495 2234986 command_runner.go:130] >         "value": "0"
	I0911 11:19:08.235498 2234986 command_runner.go:130] >       },
	I0911 11:19:08.235505 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235509 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235514 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235518 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235526 2234986 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0911 11:19:08.235532 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235540 2234986 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0911 11:19:08.235546 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235551 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235560 2234986 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0911 11:19:08.235570 2234986 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0911 11:19:08.235575 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235580 2234986 command_runner.go:130] >       "size": "123163446",
	I0911 11:19:08.235586 2234986 command_runner.go:130] >       "uid": {
	I0911 11:19:08.235590 2234986 command_runner.go:130] >         "value": "0"
	I0911 11:19:08.235593 2234986 command_runner.go:130] >       },
	I0911 11:19:08.235597 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235602 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235606 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235610 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235616 2234986 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0911 11:19:08.235622 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235627 2234986 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0911 11:19:08.235633 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235640 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235650 2234986 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0911 11:19:08.235659 2234986 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0911 11:19:08.235666 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235670 2234986 command_runner.go:130] >       "size": "74680215",
	I0911 11:19:08.235676 2234986 command_runner.go:130] >       "uid": null,
	I0911 11:19:08.235680 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235684 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235690 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235693 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235702 2234986 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0911 11:19:08.235708 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235713 2234986 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0911 11:19:08.235719 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235723 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235736 2234986 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0911 11:19:08.235768 2234986 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0911 11:19:08.235778 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235784 2234986 command_runner.go:130] >       "size": "61477686",
	I0911 11:19:08.235788 2234986 command_runner.go:130] >       "uid": {
	I0911 11:19:08.235792 2234986 command_runner.go:130] >         "value": "0"
	I0911 11:19:08.235795 2234986 command_runner.go:130] >       },
	I0911 11:19:08.235802 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235806 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235812 2234986 command_runner.go:130] >     },
	I0911 11:19:08.235815 2234986 command_runner.go:130] >     {
	I0911 11:19:08.235823 2234986 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0911 11:19:08.235829 2234986 command_runner.go:130] >       "repoTags": [
	I0911 11:19:08.235834 2234986 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0911 11:19:08.235843 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235850 2234986 command_runner.go:130] >       "repoDigests": [
	I0911 11:19:08.235865 2234986 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0911 11:19:08.235880 2234986 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0911 11:19:08.235888 2234986 command_runner.go:130] >       ],
	I0911 11:19:08.235895 2234986 command_runner.go:130] >       "size": "750414",
	I0911 11:19:08.235905 2234986 command_runner.go:130] >       "uid": {
	I0911 11:19:08.235915 2234986 command_runner.go:130] >         "value": "65535"
	I0911 11:19:08.235924 2234986 command_runner.go:130] >       },
	I0911 11:19:08.235931 2234986 command_runner.go:130] >       "username": "",
	I0911 11:19:08.235944 2234986 command_runner.go:130] >       "spec": null
	I0911 11:19:08.235953 2234986 command_runner.go:130] >     }
	I0911 11:19:08.235958 2234986 command_runner.go:130] >   ]
	I0911 11:19:08.235963 2234986 command_runner.go:130] > }
	I0911 11:19:08.236092 2234986 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:19:08.236105 2234986 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:19:08.236172 2234986 ssh_runner.go:195] Run: crio config
	I0911 11:19:08.289333 2234986 command_runner.go:130] ! time="2023-09-11 11:19:08.280837362Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0911 11:19:08.289370 2234986 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0911 11:19:08.305330 2234986 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0911 11:19:08.305377 2234986 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0911 11:19:08.305388 2234986 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0911 11:19:08.305394 2234986 command_runner.go:130] > #
	I0911 11:19:08.305405 2234986 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0911 11:19:08.305415 2234986 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0911 11:19:08.305429 2234986 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0911 11:19:08.305448 2234986 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0911 11:19:08.305456 2234986 command_runner.go:130] > # reload'.
	I0911 11:19:08.305464 2234986 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0911 11:19:08.305473 2234986 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0911 11:19:08.305479 2234986 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0911 11:19:08.305487 2234986 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0911 11:19:08.305491 2234986 command_runner.go:130] > [crio]
	I0911 11:19:08.305497 2234986 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0911 11:19:08.305504 2234986 command_runner.go:130] > # containers images, in this directory.
	I0911 11:19:08.305512 2234986 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0911 11:19:08.305523 2234986 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0911 11:19:08.305530 2234986 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0911 11:19:08.305536 2234986 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0911 11:19:08.305544 2234986 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0911 11:19:08.305550 2234986 command_runner.go:130] > storage_driver = "overlay"
	I0911 11:19:08.305558 2234986 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0911 11:19:08.305565 2234986 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0911 11:19:08.305574 2234986 command_runner.go:130] > storage_option = [
	I0911 11:19:08.305581 2234986 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0911 11:19:08.305587 2234986 command_runner.go:130] > ]
	I0911 11:19:08.305594 2234986 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0911 11:19:08.305602 2234986 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0911 11:19:08.305606 2234986 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0911 11:19:08.305614 2234986 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0911 11:19:08.305623 2234986 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0911 11:19:08.305629 2234986 command_runner.go:130] > # always happen on a node reboot
	I0911 11:19:08.305634 2234986 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0911 11:19:08.305642 2234986 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0911 11:19:08.305648 2234986 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0911 11:19:08.305659 2234986 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0911 11:19:08.305666 2234986 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0911 11:19:08.305674 2234986 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0911 11:19:08.305683 2234986 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0911 11:19:08.305689 2234986 command_runner.go:130] > # internal_wipe = true
	I0911 11:19:08.305695 2234986 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0911 11:19:08.305707 2234986 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0911 11:19:08.305715 2234986 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0911 11:19:08.305720 2234986 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0911 11:19:08.305729 2234986 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0911 11:19:08.305734 2234986 command_runner.go:130] > [crio.api]
	I0911 11:19:08.305740 2234986 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0911 11:19:08.305747 2234986 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0911 11:19:08.305752 2234986 command_runner.go:130] > # IP address on which the stream server will listen.
	I0911 11:19:08.305756 2234986 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0911 11:19:08.305765 2234986 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0911 11:19:08.305772 2234986 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0911 11:19:08.305781 2234986 command_runner.go:130] > # stream_port = "0"
	I0911 11:19:08.305789 2234986 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0911 11:19:08.305799 2234986 command_runner.go:130] > # stream_enable_tls = false
	I0911 11:19:08.305809 2234986 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0911 11:19:08.305819 2234986 command_runner.go:130] > # stream_idle_timeout = ""
	I0911 11:19:08.305828 2234986 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0911 11:19:08.305841 2234986 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0911 11:19:08.305848 2234986 command_runner.go:130] > # minutes.
	I0911 11:19:08.305858 2234986 command_runner.go:130] > # stream_tls_cert = ""
	I0911 11:19:08.305867 2234986 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0911 11:19:08.305880 2234986 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0911 11:19:08.305890 2234986 command_runner.go:130] > # stream_tls_key = ""
	I0911 11:19:08.305899 2234986 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0911 11:19:08.305911 2234986 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0911 11:19:08.305923 2234986 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0911 11:19:08.305929 2234986 command_runner.go:130] > # stream_tls_ca = ""
	I0911 11:19:08.305937 2234986 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:19:08.305944 2234986 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0911 11:19:08.305951 2234986 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:19:08.305958 2234986 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0911 11:19:08.305975 2234986 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0911 11:19:08.305983 2234986 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0911 11:19:08.305988 2234986 command_runner.go:130] > [crio.runtime]
	I0911 11:19:08.305994 2234986 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0911 11:19:08.306002 2234986 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0911 11:19:08.306010 2234986 command_runner.go:130] > # "nofile=1024:2048"
	I0911 11:19:08.306016 2234986 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0911 11:19:08.306022 2234986 command_runner.go:130] > # default_ulimits = [
	I0911 11:19:08.306026 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306031 2234986 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0911 11:19:08.306037 2234986 command_runner.go:130] > # no_pivot = false
	I0911 11:19:08.306043 2234986 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0911 11:19:08.306051 2234986 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0911 11:19:08.306058 2234986 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0911 11:19:08.306064 2234986 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0911 11:19:08.306071 2234986 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0911 11:19:08.306078 2234986 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:19:08.306084 2234986 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0911 11:19:08.306089 2234986 command_runner.go:130] > # Cgroup setting for conmon
	I0911 11:19:08.306096 2234986 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0911 11:19:08.306102 2234986 command_runner.go:130] > conmon_cgroup = "pod"
	I0911 11:19:08.306108 2234986 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0911 11:19:08.306120 2234986 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0911 11:19:08.306129 2234986 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:19:08.306135 2234986 command_runner.go:130] > conmon_env = [
	I0911 11:19:08.306141 2234986 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0911 11:19:08.306146 2234986 command_runner.go:130] > ]
	I0911 11:19:08.306152 2234986 command_runner.go:130] > # Additional environment variables to set for all the
	I0911 11:19:08.306159 2234986 command_runner.go:130] > # containers. These are overridden if set in the
	I0911 11:19:08.306165 2234986 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0911 11:19:08.306171 2234986 command_runner.go:130] > # default_env = [
	I0911 11:19:08.306174 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306182 2234986 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0911 11:19:08.306190 2234986 command_runner.go:130] > # selinux = false
	I0911 11:19:08.306196 2234986 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0911 11:19:08.306204 2234986 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0911 11:19:08.306210 2234986 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0911 11:19:08.306216 2234986 command_runner.go:130] > # seccomp_profile = ""
	I0911 11:19:08.306222 2234986 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0911 11:19:08.306230 2234986 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0911 11:19:08.306238 2234986 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0911 11:19:08.306245 2234986 command_runner.go:130] > # which might increase security.
	I0911 11:19:08.306252 2234986 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0911 11:19:08.306258 2234986 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0911 11:19:08.306266 2234986 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0911 11:19:08.306274 2234986 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0911 11:19:08.306283 2234986 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0911 11:19:08.306290 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:19:08.306294 2234986 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0911 11:19:08.306302 2234986 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0911 11:19:08.306306 2234986 command_runner.go:130] > # the cgroup blockio controller.
	I0911 11:19:08.306312 2234986 command_runner.go:130] > # blockio_config_file = ""
	I0911 11:19:08.306318 2234986 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0911 11:19:08.306324 2234986 command_runner.go:130] > # irqbalance daemon.
	I0911 11:19:08.306329 2234986 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0911 11:19:08.306338 2234986 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0911 11:19:08.306344 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:19:08.306348 2234986 command_runner.go:130] > # rdt_config_file = ""
	I0911 11:19:08.306353 2234986 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0911 11:19:08.306360 2234986 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0911 11:19:08.306366 2234986 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0911 11:19:08.306373 2234986 command_runner.go:130] > # separate_pull_cgroup = ""
	I0911 11:19:08.306379 2234986 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0911 11:19:08.306387 2234986 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0911 11:19:08.306391 2234986 command_runner.go:130] > # will be added.
	I0911 11:19:08.306397 2234986 command_runner.go:130] > # default_capabilities = [
	I0911 11:19:08.306401 2234986 command_runner.go:130] > # 	"CHOWN",
	I0911 11:19:08.306407 2234986 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0911 11:19:08.306411 2234986 command_runner.go:130] > # 	"FSETID",
	I0911 11:19:08.306417 2234986 command_runner.go:130] > # 	"FOWNER",
	I0911 11:19:08.306421 2234986 command_runner.go:130] > # 	"SETGID",
	I0911 11:19:08.306427 2234986 command_runner.go:130] > # 	"SETUID",
	I0911 11:19:08.306430 2234986 command_runner.go:130] > # 	"SETPCAP",
	I0911 11:19:08.306436 2234986 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0911 11:19:08.306442 2234986 command_runner.go:130] > # 	"KILL",
	I0911 11:19:08.306447 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306454 2234986 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0911 11:19:08.306462 2234986 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:19:08.306468 2234986 command_runner.go:130] > # default_sysctls = [
	I0911 11:19:08.306472 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306478 2234986 command_runner.go:130] > # List of devices on the host that a
	I0911 11:19:08.306484 2234986 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0911 11:19:08.306490 2234986 command_runner.go:130] > # allowed_devices = [
	I0911 11:19:08.306494 2234986 command_runner.go:130] > # 	"/dev/fuse",
	I0911 11:19:08.306499 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306504 2234986 command_runner.go:130] > # List of additional devices, specified as
	I0911 11:19:08.306513 2234986 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0911 11:19:08.306521 2234986 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0911 11:19:08.306539 2234986 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:19:08.306546 2234986 command_runner.go:130] > # additional_devices = [
	I0911 11:19:08.306549 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306557 2234986 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0911 11:19:08.306562 2234986 command_runner.go:130] > # cdi_spec_dirs = [
	I0911 11:19:08.306566 2234986 command_runner.go:130] > # 	"/etc/cdi",
	I0911 11:19:08.306572 2234986 command_runner.go:130] > # 	"/var/run/cdi",
	I0911 11:19:08.306575 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306584 2234986 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0911 11:19:08.306592 2234986 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0911 11:19:08.306598 2234986 command_runner.go:130] > # Defaults to false.
	I0911 11:19:08.306603 2234986 command_runner.go:130] > # device_ownership_from_security_context = false
	I0911 11:19:08.306611 2234986 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0911 11:19:08.306618 2234986 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0911 11:19:08.306625 2234986 command_runner.go:130] > # hooks_dir = [
	I0911 11:19:08.306629 2234986 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0911 11:19:08.306635 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.306641 2234986 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0911 11:19:08.306649 2234986 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0911 11:19:08.306656 2234986 command_runner.go:130] > # its default mounts from the following two files:
	I0911 11:19:08.306660 2234986 command_runner.go:130] > #
	I0911 11:19:08.306668 2234986 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0911 11:19:08.306676 2234986 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0911 11:19:08.306684 2234986 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0911 11:19:08.306693 2234986 command_runner.go:130] > #
	I0911 11:19:08.306701 2234986 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0911 11:19:08.306711 2234986 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0911 11:19:08.306719 2234986 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0911 11:19:08.306726 2234986 command_runner.go:130] > #      only add mounts it finds in this file.
	I0911 11:19:08.306729 2234986 command_runner.go:130] > #
	I0911 11:19:08.306736 2234986 command_runner.go:130] > # default_mounts_file = ""
	I0911 11:19:08.306741 2234986 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0911 11:19:08.306749 2234986 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0911 11:19:08.306753 2234986 command_runner.go:130] > pids_limit = 1024
	I0911 11:19:08.306759 2234986 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0911 11:19:08.306765 2234986 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0911 11:19:08.306773 2234986 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0911 11:19:08.306787 2234986 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0911 11:19:08.306796 2234986 command_runner.go:130] > # log_size_max = -1
	I0911 11:19:08.306807 2234986 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0911 11:19:08.306818 2234986 command_runner.go:130] > # log_to_journald = false
	I0911 11:19:08.306827 2234986 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0911 11:19:08.306838 2234986 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0911 11:19:08.306850 2234986 command_runner.go:130] > # Path to directory for container attach sockets.
	I0911 11:19:08.306861 2234986 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0911 11:19:08.306869 2234986 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0911 11:19:08.306879 2234986 command_runner.go:130] > # bind_mount_prefix = ""
	I0911 11:19:08.306886 2234986 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0911 11:19:08.306892 2234986 command_runner.go:130] > # read_only = false
	I0911 11:19:08.306898 2234986 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0911 11:19:08.306909 2234986 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0911 11:19:08.306916 2234986 command_runner.go:130] > # live configuration reload.
	I0911 11:19:08.306920 2234986 command_runner.go:130] > # log_level = "info"
	I0911 11:19:08.306928 2234986 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0911 11:19:08.306933 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:19:08.306939 2234986 command_runner.go:130] > # log_filter = ""
	I0911 11:19:08.306945 2234986 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0911 11:19:08.306953 2234986 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0911 11:19:08.306957 2234986 command_runner.go:130] > # separated by comma.
	I0911 11:19:08.306963 2234986 command_runner.go:130] > # uid_mappings = ""
	I0911 11:19:08.306969 2234986 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0911 11:19:08.306978 2234986 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0911 11:19:08.306988 2234986 command_runner.go:130] > # separated by comma.
	I0911 11:19:08.306994 2234986 command_runner.go:130] > # gid_mappings = ""
	I0911 11:19:08.307000 2234986 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0911 11:19:08.307008 2234986 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:19:08.307016 2234986 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:19:08.307022 2234986 command_runner.go:130] > # minimum_mappable_uid = -1
	I0911 11:19:08.307028 2234986 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0911 11:19:08.307036 2234986 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:19:08.307044 2234986 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:19:08.307048 2234986 command_runner.go:130] > # minimum_mappable_gid = -1
	I0911 11:19:08.307056 2234986 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0911 11:19:08.307064 2234986 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0911 11:19:08.307070 2234986 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0911 11:19:08.307076 2234986 command_runner.go:130] > # ctr_stop_timeout = 30
	I0911 11:19:08.307082 2234986 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0911 11:19:08.307090 2234986 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0911 11:19:08.307097 2234986 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0911 11:19:08.307102 2234986 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0911 11:19:08.307111 2234986 command_runner.go:130] > drop_infra_ctr = false
	I0911 11:19:08.307122 2234986 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0911 11:19:08.307130 2234986 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0911 11:19:08.307138 2234986 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0911 11:19:08.307144 2234986 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0911 11:19:08.307150 2234986 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0911 11:19:08.307157 2234986 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0911 11:19:08.307161 2234986 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0911 11:19:08.307170 2234986 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0911 11:19:08.307177 2234986 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0911 11:19:08.307183 2234986 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0911 11:19:08.307191 2234986 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0911 11:19:08.307199 2234986 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0911 11:19:08.307205 2234986 command_runner.go:130] > # default_runtime = "runc"
	I0911 11:19:08.307210 2234986 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0911 11:19:08.307220 2234986 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0911 11:19:08.307230 2234986 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0911 11:19:08.307249 2234986 command_runner.go:130] > # creation as a file is not desired either.
	I0911 11:19:08.307261 2234986 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0911 11:19:08.307268 2234986 command_runner.go:130] > # the hostname is being managed dynamically.
	I0911 11:19:08.307273 2234986 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0911 11:19:08.307279 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.307285 2234986 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0911 11:19:08.307293 2234986 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0911 11:19:08.307302 2234986 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0911 11:19:08.307310 2234986 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0911 11:19:08.307314 2234986 command_runner.go:130] > #
	I0911 11:19:08.307318 2234986 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0911 11:19:08.307325 2234986 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0911 11:19:08.307329 2234986 command_runner.go:130] > #  runtime_type = "oci"
	I0911 11:19:08.307338 2234986 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0911 11:19:08.307345 2234986 command_runner.go:130] > #  privileged_without_host_devices = false
	I0911 11:19:08.307349 2234986 command_runner.go:130] > #  allowed_annotations = []
	I0911 11:19:08.307355 2234986 command_runner.go:130] > # Where:
	I0911 11:19:08.307360 2234986 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0911 11:19:08.307369 2234986 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0911 11:19:08.307377 2234986 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0911 11:19:08.307385 2234986 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0911 11:19:08.307391 2234986 command_runner.go:130] > #   in $PATH.
	I0911 11:19:08.307397 2234986 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0911 11:19:08.307403 2234986 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0911 11:19:08.307409 2234986 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0911 11:19:08.307415 2234986 command_runner.go:130] > #   state.
	I0911 11:19:08.307421 2234986 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0911 11:19:08.307429 2234986 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0911 11:19:08.307437 2234986 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0911 11:19:08.307445 2234986 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0911 11:19:08.307454 2234986 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0911 11:19:08.307462 2234986 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0911 11:19:08.307469 2234986 command_runner.go:130] > #   The currently recognized values are:
	I0911 11:19:08.307475 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0911 11:19:08.307484 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0911 11:19:08.307492 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0911 11:19:08.307502 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0911 11:19:08.307512 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0911 11:19:08.307518 2234986 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0911 11:19:08.307526 2234986 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0911 11:19:08.307535 2234986 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0911 11:19:08.307542 2234986 command_runner.go:130] > #   should be moved to the container's cgroup
	I0911 11:19:08.307546 2234986 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0911 11:19:08.307553 2234986 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0911 11:19:08.307559 2234986 command_runner.go:130] > runtime_type = "oci"
	I0911 11:19:08.307566 2234986 command_runner.go:130] > runtime_root = "/run/runc"
	I0911 11:19:08.307570 2234986 command_runner.go:130] > runtime_config_path = ""
	I0911 11:19:08.307576 2234986 command_runner.go:130] > monitor_path = ""
	I0911 11:19:08.307580 2234986 command_runner.go:130] > monitor_cgroup = ""
	I0911 11:19:08.307586 2234986 command_runner.go:130] > monitor_exec_cgroup = ""
	I0911 11:19:08.307592 2234986 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0911 11:19:08.307598 2234986 command_runner.go:130] > # running containers
	I0911 11:19:08.307603 2234986 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0911 11:19:08.307608 2234986 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0911 11:19:08.307643 2234986 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0911 11:19:08.307651 2234986 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0911 11:19:08.307659 2234986 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0911 11:19:08.307664 2234986 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0911 11:19:08.307670 2234986 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0911 11:19:08.307675 2234986 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0911 11:19:08.307682 2234986 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0911 11:19:08.307687 2234986 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0911 11:19:08.307695 2234986 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0911 11:19:08.307702 2234986 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0911 11:19:08.307710 2234986 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0911 11:19:08.307720 2234986 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0911 11:19:08.307729 2234986 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0911 11:19:08.307736 2234986 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0911 11:19:08.307747 2234986 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0911 11:19:08.307760 2234986 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0911 11:19:08.307768 2234986 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0911 11:19:08.307774 2234986 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0911 11:19:08.307786 2234986 command_runner.go:130] > # Example:
	I0911 11:19:08.307797 2234986 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0911 11:19:08.307809 2234986 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0911 11:19:08.307817 2234986 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0911 11:19:08.307828 2234986 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0911 11:19:08.307837 2234986 command_runner.go:130] > # cpuset = 0
	I0911 11:19:08.307846 2234986 command_runner.go:130] > # cpushares = "0-1"
	I0911 11:19:08.307855 2234986 command_runner.go:130] > # Where:
	I0911 11:19:08.307862 2234986 command_runner.go:130] > # The workload name is workload-type.
	I0911 11:19:08.307876 2234986 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0911 11:19:08.307887 2234986 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0911 11:19:08.307897 2234986 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0911 11:19:08.307912 2234986 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0911 11:19:08.307922 2234986 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0911 11:19:08.307926 2234986 command_runner.go:130] > # 
	I0911 11:19:08.307932 2234986 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0911 11:19:08.307937 2234986 command_runner.go:130] > #
	I0911 11:19:08.307943 2234986 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0911 11:19:08.307951 2234986 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0911 11:19:08.307958 2234986 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0911 11:19:08.307966 2234986 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0911 11:19:08.307971 2234986 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0911 11:19:08.307978 2234986 command_runner.go:130] > [crio.image]
	I0911 11:19:08.307983 2234986 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0911 11:19:08.307990 2234986 command_runner.go:130] > # default_transport = "docker://"
	I0911 11:19:08.307996 2234986 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0911 11:19:08.308004 2234986 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:19:08.308008 2234986 command_runner.go:130] > # global_auth_file = ""
	I0911 11:19:08.308014 2234986 command_runner.go:130] > # The image used to instantiate infra containers.
	I0911 11:19:08.308018 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:19:08.308026 2234986 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0911 11:19:08.308032 2234986 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0911 11:19:08.308038 2234986 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:19:08.308043 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:19:08.308047 2234986 command_runner.go:130] > # pause_image_auth_file = ""
	I0911 11:19:08.308053 2234986 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0911 11:19:08.308060 2234986 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0911 11:19:08.308071 2234986 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0911 11:19:08.308076 2234986 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0911 11:19:08.308080 2234986 command_runner.go:130] > # pause_command = "/pause"
	I0911 11:19:08.308085 2234986 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0911 11:19:08.308091 2234986 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0911 11:19:08.308097 2234986 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0911 11:19:08.308102 2234986 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0911 11:19:08.308107 2234986 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0911 11:19:08.308111 2234986 command_runner.go:130] > # signature_policy = ""
	I0911 11:19:08.308120 2234986 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0911 11:19:08.308126 2234986 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0911 11:19:08.308130 2234986 command_runner.go:130] > # changing them here.
	I0911 11:19:08.308133 2234986 command_runner.go:130] > # insecure_registries = [
	I0911 11:19:08.308136 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.308146 2234986 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0911 11:19:08.308153 2234986 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0911 11:19:08.308160 2234986 command_runner.go:130] > # image_volumes = "mkdir"
	I0911 11:19:08.308165 2234986 command_runner.go:130] > # Temporary directory to use for storing big files
	I0911 11:19:08.308172 2234986 command_runner.go:130] > # big_files_temporary_dir = ""
	I0911 11:19:08.308178 2234986 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0911 11:19:08.308184 2234986 command_runner.go:130] > # CNI plugins.
	I0911 11:19:08.308189 2234986 command_runner.go:130] > [crio.network]
	I0911 11:19:08.308197 2234986 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0911 11:19:08.308202 2234986 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0911 11:19:08.308213 2234986 command_runner.go:130] > # cni_default_network = ""
	I0911 11:19:08.308218 2234986 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0911 11:19:08.308225 2234986 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0911 11:19:08.308231 2234986 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0911 11:19:08.308237 2234986 command_runner.go:130] > # plugin_dirs = [
	I0911 11:19:08.308241 2234986 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0911 11:19:08.308247 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.308253 2234986 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0911 11:19:08.308259 2234986 command_runner.go:130] > [crio.metrics]
	I0911 11:19:08.308265 2234986 command_runner.go:130] > # Globally enable or disable metrics support.
	I0911 11:19:08.308294 2234986 command_runner.go:130] > enable_metrics = true
	I0911 11:19:08.308316 2234986 command_runner.go:130] > # Specify enabled metrics collectors.
	I0911 11:19:08.308323 2234986 command_runner.go:130] > # Per default all metrics are enabled.
	I0911 11:19:08.308329 2234986 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0911 11:19:08.308337 2234986 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0911 11:19:08.308345 2234986 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0911 11:19:08.308353 2234986 command_runner.go:130] > # metrics_collectors = [
	I0911 11:19:08.308360 2234986 command_runner.go:130] > # 	"operations",
	I0911 11:19:08.308365 2234986 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0911 11:19:08.308372 2234986 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0911 11:19:08.308376 2234986 command_runner.go:130] > # 	"operations_errors",
	I0911 11:19:08.308383 2234986 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0911 11:19:08.308387 2234986 command_runner.go:130] > # 	"image_pulls_by_name",
	I0911 11:19:08.308393 2234986 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0911 11:19:08.308397 2234986 command_runner.go:130] > # 	"image_pulls_failures",
	I0911 11:19:08.308404 2234986 command_runner.go:130] > # 	"image_pulls_successes",
	I0911 11:19:08.308408 2234986 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0911 11:19:08.308414 2234986 command_runner.go:130] > # 	"image_layer_reuse",
	I0911 11:19:08.308419 2234986 command_runner.go:130] > # 	"containers_oom_total",
	I0911 11:19:08.308425 2234986 command_runner.go:130] > # 	"containers_oom",
	I0911 11:19:08.308430 2234986 command_runner.go:130] > # 	"processes_defunct",
	I0911 11:19:08.308436 2234986 command_runner.go:130] > # 	"operations_total",
	I0911 11:19:08.308440 2234986 command_runner.go:130] > # 	"operations_latency_seconds",
	I0911 11:19:08.308447 2234986 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0911 11:19:08.308451 2234986 command_runner.go:130] > # 	"operations_errors_total",
	I0911 11:19:08.308458 2234986 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0911 11:19:08.308462 2234986 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0911 11:19:08.308469 2234986 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0911 11:19:08.308473 2234986 command_runner.go:130] > # 	"image_pulls_success_total",
	I0911 11:19:08.308479 2234986 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0911 11:19:08.308484 2234986 command_runner.go:130] > # 	"containers_oom_count_total",
	I0911 11:19:08.308489 2234986 command_runner.go:130] > # ]
	I0911 11:19:08.308495 2234986 command_runner.go:130] > # The port on which the metrics server will listen.
	I0911 11:19:08.308501 2234986 command_runner.go:130] > # metrics_port = 9090
	I0911 11:19:08.308506 2234986 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0911 11:19:08.308512 2234986 command_runner.go:130] > # metrics_socket = ""
	I0911 11:19:08.308517 2234986 command_runner.go:130] > # The certificate for the secure metrics server.
	I0911 11:19:08.308526 2234986 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0911 11:19:08.308535 2234986 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0911 11:19:08.308542 2234986 command_runner.go:130] > # certificate on any modification event.
	I0911 11:19:08.308546 2234986 command_runner.go:130] > # metrics_cert = ""
	I0911 11:19:08.308553 2234986 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0911 11:19:08.308558 2234986 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0911 11:19:08.308564 2234986 command_runner.go:130] > # metrics_key = ""
	I0911 11:19:08.308570 2234986 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0911 11:19:08.308575 2234986 command_runner.go:130] > [crio.tracing]
	I0911 11:19:08.308581 2234986 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0911 11:19:08.308587 2234986 command_runner.go:130] > # enable_tracing = false
	I0911 11:19:08.308593 2234986 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0911 11:19:08.308599 2234986 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0911 11:19:08.308604 2234986 command_runner.go:130] > # Number of samples to collect per million spans.
	I0911 11:19:08.308613 2234986 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0911 11:19:08.308621 2234986 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0911 11:19:08.308627 2234986 command_runner.go:130] > [crio.stats]
	I0911 11:19:08.308633 2234986 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0911 11:19:08.308641 2234986 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0911 11:19:08.308647 2234986 command_runner.go:130] > # stats_collection_period = 0
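	The dump above is CRI-O's commented default configuration; the handful of uncommented keys (conmon, conmon_cgroup, conmon_env, seccomp_use_default_when_empty, cgroup_manager, pids_limit, drop_infra_ctr, pinns_path, the runc runtime table, pause_image, enable_metrics and the grpc_max_* sizes) are the values minikube has actually set. A minimal sketch for spot-checking them on the node, assuming the rendered file lives at /etc/crio/crio.conf (that path is an assumption, not shown in this excerpt):

	  sudo grep -E '^(cgroup_manager|conmon|pids_limit|pause_image|pinns_path|enable_metrics|grpc_max|seccomp_use_default_when_empty|drop_infra_ctr)' /etc/crio/crio.conf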
	I0911 11:19:08.308716 2234986 cni.go:84] Creating CNI manager for ""
	I0911 11:19:08.308729 2234986 cni.go:136] 1 nodes found, recommending kindnet
	I0911 11:19:08.308748 2234986 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:19:08.308771 2234986 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-378707 NodeName:multinode-378707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:19:08.309063 2234986 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-378707"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:19:08.309158 2234986 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-378707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:19:08.309217 2234986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:19:08.320343 2234986 command_runner.go:130] > kubeadm
	I0911 11:19:08.320369 2234986 command_runner.go:130] > kubectl
	I0911 11:19:08.320374 2234986 command_runner.go:130] > kubelet
	I0911 11:19:08.320399 2234986 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:19:08.320455 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:19:08.330292 2234986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0911 11:19:08.347030 2234986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
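	With the kubelet drop-in (10-kubeadm.conf) and unit file written, systemd has to re-read its unit definitions before the new ExecStart takes effect. The log does not show that step here; a typical follow-up on a systemd host would look like this (illustrative only; kubelet is normally (re)started only once its config is in place):

	  sudo systemctl daemon-reload          # pick up the new unit and drop-in
	  sudo systemctl restart kubelet        # later, once /var/lib/kubelet/config.yaml exists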
	I0911 11:19:08.364470 2234986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
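	The generated kubeadm config lands on the node as /var/tmp/minikube/kubeadm.yaml.new. If you wanted to sanity-check it by hand before the real init, one option is a dry run with the bundled kubeadm binary listed earlier in this log (a sketch, not something minikube itself runs here):

	  sudo /var/lib/minikube/binaries/v1.28.1/kubeadm init \
	    --config /var/tmp/minikube/kubeadm.yaml.new --dry-run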
	I0911 11:19:08.381998 2234986 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0911 11:19:08.386315 2234986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
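	The one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP. Broken out for readability (an illustrative paraphrase of the same command; $$ is the shell's PID, used as a temp-file suffix):

	  { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts    # drop any stale entry
	    echo "192.168.39.237	control-plane.minikube.internal"      # append the current mapping
	  } > /tmp/h.$$                                                 # stage the rewritten file
	  sudo cp /tmp/h.$$ /etc/hosts                                  # install it over the original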
	I0911 11:19:08.399175 2234986 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707 for IP: 192.168.39.237
	I0911 11:19:08.399237 2234986 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:08.399429 2234986 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:19:08.399467 2234986 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:19:08.399515 2234986 certs.go:319] generating minikube-user signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key
	I0911 11:19:08.399536 2234986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt with IP's: []
	I0911 11:19:08.555391 2234986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt ...
	I0911 11:19:08.555434 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt: {Name:mk791ee279c087902a4f46290a8c02aa745d5abb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:08.555630 2234986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key ...
	I0911 11:19:08.555642 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key: {Name:mk7885774791f142832cb79fb5486951c89cb63d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:08.555728 2234986 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key.cf509944
	I0911 11:19:08.555749 2234986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt.cf509944 with IP's: [192.168.39.237 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 11:19:08.688040 2234986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt.cf509944 ...
	I0911 11:19:08.688077 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt.cf509944: {Name:mk81417bff9b4ace24a00fcde161670bcbf9656f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:08.688256 2234986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key.cf509944 ...
	I0911 11:19:08.688269 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key.cf509944: {Name:mk9088f2d6d9570cc5c17eb50a8cbe171dc8a144 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:08.688350 2234986 certs.go:337] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt.cf509944 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt
	I0911 11:19:08.688418 2234986 certs.go:341] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key.cf509944 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key
	I0911 11:19:08.688467 2234986 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key
	I0911 11:19:08.688480 2234986 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.crt with IP's: []
	I0911 11:19:08.761268 2234986 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.crt ...
	I0911 11:19:08.761298 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.crt: {Name:mkedae569507d1508af4deb12a11130b0483c63b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:08.761473 2234986 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key ...
	I0911 11:19:08.761484 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key: {Name:mk55fb3e54748129fd8057fb41ecf64943fca527 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:08.761552 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0911 11:19:08.761571 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0911 11:19:08.761581 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0911 11:19:08.761593 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0911 11:19:08.761603 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:19:08.761617 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:19:08.761633 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:19:08.761645 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:19:08.761704 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:19:08.761742 2234986 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:19:08.761755 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:19:08.761778 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:19:08.761801 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:19:08.761820 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:19:08.761859 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:19:08.761891 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /usr/share/ca-certificates/22224712.pem
	I0911 11:19:08.761904 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:19:08.761916 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem -> /usr/share/ca-certificates/2222471.pem
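	The apiserver certificate generated above is expected to carry SANs for 192.168.39.237, 10.96.0.1, 127.0.0.1 and 10.0.0.1 plus the control-plane DNS names. A quick way to confirm that on the node once the files below have been copied (a sketch; the destination path comes from the asset list above):

	  sudo openssl x509 -in /var/lib/minikube/certs/apiserver.crt -noout -text \
	    | grep -A1 'Subject Alternative Name'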
	I0911 11:19:08.762517 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:19:08.789555 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 11:19:08.814356 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:19:08.839228 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:19:08.863724 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:19:08.888308 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:19:08.912332 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:19:08.937113 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:19:08.961572 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:19:08.987025 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:19:09.012380 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:19:09.036682 2234986 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:19:09.056005 2234986 ssh_runner.go:195] Run: openssl version
	I0911 11:19:09.061590 2234986 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0911 11:19:09.061858 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:19:09.073169 2234986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:19:09.078142 2234986 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:19:09.078175 2234986 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:19:09.078219 2234986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:19:09.083781 2234986 command_runner.go:130] > 51391683
	I0911 11:19:09.084048 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 11:19:09.095343 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:19:09.107264 2234986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:19:09.112463 2234986 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:19:09.112646 2234986 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:19:09.112712 2234986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:19:09.119007 2234986 command_runner.go:130] > 3ec20f2e
	I0911 11:19:09.119088 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:19:09.129980 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:19:09.140730 2234986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:19:09.146137 2234986 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:19:09.146172 2234986 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:19:09.146237 2234986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:19:09.152117 2234986 command_runner.go:130] > b5213941
	I0911 11:19:09.152207 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
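Editor's note: the three test/hash/link sequences above are how minikube installs each CA into the guest's OpenSSL trust store: the certificate is linked under /usr/share/ca-certificates, its OpenSSL subject hash is computed, and a "<hash>.0" symlink is created in /etc/ssl/certs. A minimal sketch of the same steps, using an illustrative certificate name rather than one from this run:

    # illustrative cert name; mirrors the ln/openssl sequence in the log above
    CERT=/usr/share/ca-certificates/example.pem
    sudo ln -fs "$CERT" /etc/ssl/certs/example.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # subject hash, e.g. 51391683 in this run
    sudo ln -fs /etc/ssl/certs/example.pem "/etc/ssl/certs/${HASH}.0"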
	I0911 11:19:09.162574 2234986 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:19:09.167064 2234986 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:19:09.167104 2234986 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:19:09.167150 2234986 kubeadm.go:404] StartCluster: {Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOpti
ons:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:19:09.167235 2234986 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:19:09.167291 2234986 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:19:09.198226 2234986 cri.go:89] found id: ""
	I0911 11:19:09.198304 2234986 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:19:09.208459 2234986 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0911 11:19:09.208486 2234986 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0911 11:19:09.208495 2234986 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0911 11:19:09.208581 2234986 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:19:09.218885 2234986 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:19:09.228902 2234986 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0911 11:19:09.228928 2234986 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0911 11:19:09.228935 2234986 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0911 11:19:09.228948 2234986 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:19:09.229019 2234986 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:19:09.229072 2234986 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 11:19:09.620969 2234986 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:19:09.621011 2234986 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:19:21.992165 2234986 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 11:19:21.992196 2234986 command_runner.go:130] > [init] Using Kubernetes version: v1.28.1
	I0911 11:19:21.992236 2234986 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 11:19:21.992241 2234986 command_runner.go:130] > [preflight] Running pre-flight checks
	I0911 11:19:21.992351 2234986 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:19:21.992380 2234986 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 11:19:21.992519 2234986 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:19:21.992536 2234986 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 11:19:21.992662 2234986 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:19:21.992675 2234986 command_runner.go:130] > [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 11:19:21.992760 2234986 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:19:21.992787 2234986 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:19:21.994617 2234986 out.go:204]   - Generating certificates and keys ...
	I0911 11:19:21.994721 2234986 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0911 11:19:21.994731 2234986 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 11:19:21.994794 2234986 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0911 11:19:21.994802 2234986 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 11:19:21.994884 2234986 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:19:21.994892 2234986 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 11:19:21.994963 2234986 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:19:21.994975 2234986 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 11:19:21.995051 2234986 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0911 11:19:21.995060 2234986 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 11:19:21.995130 2234986 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0911 11:19:21.995150 2234986 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 11:19:21.995217 2234986 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0911 11:19:21.995227 2234986 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 11:19:21.995388 2234986 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-378707] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0911 11:19:21.995399 2234986 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-378707] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0911 11:19:21.995478 2234986 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0911 11:19:21.995487 2234986 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 11:19:21.995637 2234986 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-378707] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0911 11:19:21.995651 2234986 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-378707] and IPs [192.168.39.237 127.0.0.1 ::1]
	I0911 11:19:21.995745 2234986 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:19:21.995767 2234986 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 11:19:21.995875 2234986 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:19:21.995886 2234986 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 11:19:21.995946 2234986 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0911 11:19:21.995955 2234986 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 11:19:21.996039 2234986 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:19:21.996054 2234986 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:19:21.996109 2234986 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:19:21.996118 2234986 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:19:21.996196 2234986 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:19:21.996208 2234986 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:19:21.996289 2234986 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:19:21.996299 2234986 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:19:21.996365 2234986 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:19:21.996374 2234986 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:19:21.996452 2234986 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:19:21.996459 2234986 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:19:21.996537 2234986 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:19:21.996549 2234986 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:19:21.999592 2234986 out.go:204]   - Booting up control plane ...
	I0911 11:19:21.999703 2234986 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:19:21.999724 2234986 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:19:21.999805 2234986 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:19:21.999813 2234986 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:19:21.999870 2234986 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:19:21.999876 2234986 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:19:21.999956 2234986 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:19:21.999968 2234986 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:19:22.000098 2234986 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:19:22.000117 2234986 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:19:22.000165 2234986 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0911 11:19:22.000184 2234986 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 11:19:22.000372 2234986 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:19:22.000380 2234986 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 11:19:22.000466 2234986 command_runner.go:130] > [apiclient] All control plane components are healthy after 8.004239 seconds
	I0911 11:19:22.000481 2234986 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.004239 seconds
	I0911 11:19:22.000606 2234986 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:19:22.000614 2234986 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 11:19:22.000748 2234986 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:19:22.000769 2234986 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 11:19:22.000868 2234986 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:19:22.000878 2234986 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 11:19:22.001071 2234986 command_runner.go:130] > [mark-control-plane] Marking the node multinode-378707 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 11:19:22.001079 2234986 kubeadm.go:322] [mark-control-plane] Marking the node multinode-378707 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 11:19:22.001123 2234986 command_runner.go:130] > [bootstrap-token] Using token: 5b50bk.y3lqp9t1vx5v6r5k
	I0911 11:19:22.001128 2234986 kubeadm.go:322] [bootstrap-token] Using token: 5b50bk.y3lqp9t1vx5v6r5k
	I0911 11:19:22.002807 2234986 out.go:204]   - Configuring RBAC rules ...
	I0911 11:19:22.002914 2234986 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:19:22.002924 2234986 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 11:19:22.003005 2234986 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:19:22.003031 2234986 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 11:19:22.003193 2234986 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:19:22.003201 2234986 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 11:19:22.003359 2234986 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:19:22.003363 2234986 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 11:19:22.003525 2234986 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:19:22.003538 2234986 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 11:19:22.003647 2234986 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:19:22.003663 2234986 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 11:19:22.003806 2234986 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:19:22.003816 2234986 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 11:19:22.003872 2234986 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 11:19:22.003886 2234986 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0911 11:19:22.003944 2234986 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 11:19:22.003953 2234986 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0911 11:19:22.003964 2234986 kubeadm.go:322] 
	I0911 11:19:22.004057 2234986 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 11:19:22.004070 2234986 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0911 11:19:22.004076 2234986 kubeadm.go:322] 
	I0911 11:19:22.004177 2234986 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 11:19:22.004189 2234986 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0911 11:19:22.004195 2234986 kubeadm.go:322] 
	I0911 11:19:22.004228 2234986 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 11:19:22.004241 2234986 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0911 11:19:22.004319 2234986 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:19:22.004330 2234986 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 11:19:22.004404 2234986 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:19:22.004413 2234986 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 11:19:22.004418 2234986 kubeadm.go:322] 
	I0911 11:19:22.004489 2234986 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 11:19:22.004498 2234986 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0911 11:19:22.004504 2234986 kubeadm.go:322] 
	I0911 11:19:22.004601 2234986 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 11:19:22.004620 2234986 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 11:19:22.004627 2234986 kubeadm.go:322] 
	I0911 11:19:22.004700 2234986 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 11:19:22.004711 2234986 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0911 11:19:22.004804 2234986 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:19:22.004829 2234986 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 11:19:22.004914 2234986 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:19:22.004924 2234986 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 11:19:22.004929 2234986 kubeadm.go:322] 
	I0911 11:19:22.005029 2234986 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:19:22.005040 2234986 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0911 11:19:22.005135 2234986 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 11:19:22.005144 2234986 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0911 11:19:22.005150 2234986 kubeadm.go:322] 
	I0911 11:19:22.005245 2234986 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 5b50bk.y3lqp9t1vx5v6r5k \
	I0911 11:19:22.005255 2234986 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token 5b50bk.y3lqp9t1vx5v6r5k \
	I0911 11:19:22.005396 2234986 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 11:19:22.005411 2234986 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 11:19:22.005458 2234986 kubeadm.go:322] 	--control-plane 
	I0911 11:19:22.005474 2234986 command_runner.go:130] > 	--control-plane 
	I0911 11:19:22.005481 2234986 kubeadm.go:322] 
	I0911 11:19:22.005591 2234986 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:19:22.005604 2234986 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0911 11:19:22.005609 2234986 kubeadm.go:322] 
	I0911 11:19:22.005717 2234986 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 5b50bk.y3lqp9t1vx5v6r5k \
	I0911 11:19:22.005726 2234986 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token 5b50bk.y3lqp9t1vx5v6r5k \
	I0911 11:19:22.005859 2234986 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 11:19:22.005875 2234986 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
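Editor's note: the --discovery-token-ca-cert-hash shown in the join commands above can be recomputed from the cluster CA if a node is joined later without the original kubeadm output. A sketch using the standard openssl pipeline from the kubeadm documentation, run on the control-plane VM where this cluster keeps its CA (assumes the default RSA CA key type):

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'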
	I0911 11:19:22.005911 2234986 cni.go:84] Creating CNI manager for ""
	I0911 11:19:22.005931 2234986 cni.go:136] 1 nodes found, recommending kindnet
	I0911 11:19:22.007684 2234986 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0911 11:19:22.009083 2234986 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:19:22.026597 2234986 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0911 11:19:22.026631 2234986 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0911 11:19:22.026642 2234986 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0911 11:19:22.026652 2234986 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:19:22.026661 2234986 command_runner.go:130] > Access: 2023-09-11 11:18:50.196948160 +0000
	I0911 11:19:22.026674 2234986 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0911 11:19:22.026682 2234986 command_runner.go:130] > Change: 2023-09-11 11:18:48.236948160 +0000
	I0911 11:19:22.026692 2234986 command_runner.go:130] >  Birth: -
	I0911 11:19:22.026755 2234986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:19:22.026771 2234986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:19:22.102824 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:19:23.219977 2234986 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0911 11:19:23.238387 2234986 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0911 11:19:23.258806 2234986 command_runner.go:130] > serviceaccount/kindnet created
	I0911 11:19:23.277631 2234986 command_runner.go:130] > daemonset.apps/kindnet created
	I0911 11:19:23.280318 2234986 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.177450042s)
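Editor's note: once the kindnet manifest is applied, the CNI DaemonSet has to roll out before pods get networking. A quick manual check, sketched with the same in-VM kubectl and kubeconfig used throughout this log (the kube-system namespace is assumed from minikube's bundled kindnet manifest, not from a log line above):

    sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
      -n kube-system rollout status daemonset/kindnet --timeout=60s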
	I0911 11:19:23.280384 2234986 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 11:19:23.280472 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:23.280551 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=multinode-378707 minikube.k8s.io/updated_at=2023_09_11T11_19_23_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:23.457094 2234986 command_runner.go:130] > node/multinode-378707 labeled
	I0911 11:19:23.458800 2234986 command_runner.go:130] > -16
	I0911 11:19:23.458862 2234986 ops.go:34] apiserver oom_adj: -16
	I0911 11:19:23.458925 2234986 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0911 11:19:23.459060 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:23.544553 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:23.544718 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:23.633660 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:24.134540 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:24.221564 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:24.633953 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:24.723194 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:25.134450 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:25.227421 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:25.633966 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:25.719984 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:26.134576 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:26.221019 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:26.634057 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:26.722116 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:27.134147 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:27.219326 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:27.633964 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:27.725926 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:28.134631 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:28.251122 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:28.634861 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:28.723016 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:29.134700 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:29.238629 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:29.634154 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:29.719182 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:30.134001 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:30.224718 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:30.634807 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:30.750077 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:31.134523 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:31.225771 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:31.634421 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:31.736278 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:32.134867 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:32.255342 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:32.634911 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:32.738065 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:33.134748 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:33.228879 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:33.634365 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:33.741081 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:34.133932 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:34.244451 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:34.634532 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:34.785491 2234986 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0911 11:19:35.133971 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 11:19:35.227969 2234986 command_runner.go:130] > NAME      SECRETS   AGE
	I0911 11:19:35.227996 2234986 command_runner.go:130] > default   0         1s
	I0911 11:19:35.229566 2234986 kubeadm.go:1081] duration metric: took 11.949159927s to wait for elevateKubeSystemPrivileges.
	I0911 11:19:35.229624 2234986 kubeadm.go:406] StartCluster complete in 26.062477862s
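Editor's note: the retry loop above polls for the "default" ServiceAccount, which only appears once the controller-manager's ServiceAccount controller is running; minikube uses it as a readiness signal before proceeding. The same wait, sketched by hand with the paths from this log:

    until sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
          get sa default >/dev/null 2>&1; do
      sleep 0.5
    done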
	I0911 11:19:35.229648 2234986 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:35.229743 2234986 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:19:35.230398 2234986 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:19:35.230657 2234986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 11:19:35.230762 2234986 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 11:19:35.230888 2234986 addons.go:69] Setting storage-provisioner=true in profile "multinode-378707"
	I0911 11:19:35.230895 2234986 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:19:35.230910 2234986 addons.go:231] Setting addon storage-provisioner=true in "multinode-378707"
	I0911 11:19:35.230917 2234986 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:19:35.230949 2234986 addons.go:69] Setting default-storageclass=true in profile "multinode-378707"
	I0911 11:19:35.230971 2234986 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-378707"
	I0911 11:19:35.230980 2234986 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:19:35.231366 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:19:35.231278 2234986 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:19:35.231415 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:19:35.231502 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:19:35.231537 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:19:35.232150 2234986 cert_rotation.go:137] Starting client certificate rotation controller
	I0911 11:19:35.232491 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:19:35.232508 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:35.232516 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:35.232522 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:35.247044 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38431
	I0911 11:19:35.247342 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35139
	I0911 11:19:35.247683 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:19:35.247761 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:19:35.248224 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:19:35.248239 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:19:35.248248 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:19:35.248252 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:19:35.248583 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:19:35.248583 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:19:35.248794 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetState
	I0911 11:19:35.249186 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:19:35.249238 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:19:35.250923 2234986 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:19:35.251145 2234986 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:19:35.251441 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/apis/storage.k8s.io/v1/storageclasses
	I0911 11:19:35.251453 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:35.251461 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:35.251467 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:35.251742 2234986 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0911 11:19:35.251760 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:35.251770 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:35.251779 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:35.251791 2234986 round_trippers.go:580]     Content-Length: 291
	I0911 11:19:35.251800 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:35 GMT
	I0911 11:19:35.251810 2234986 round_trippers.go:580]     Audit-Id: 68e0d582-0b08-4206-9f5b-60ca9f09827b
	I0911 11:19:35.251815 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:35.251821 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:35.255407 2234986 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"256","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":2},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0911 11:19:35.255787 2234986 request.go:1212] Request Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"256","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0911 11:19:35.255835 2234986 round_trippers.go:463] PUT https://192.168.39.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:19:35.255840 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:35.255847 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:35.255856 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:35.255862 2234986 round_trippers.go:473]     Content-Type: application/json
	I0911 11:19:35.259377 2234986 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0911 11:19:35.259396 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:35.259402 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:35 GMT
	I0911 11:19:35.259408 2234986 round_trippers.go:580]     Audit-Id: cffad02f-2dd4-489c-9e8b-4edb468a065b
	I0911 11:19:35.259413 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:35.259419 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:35.259425 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:35.259430 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:35.259435 2234986 round_trippers.go:580]     Content-Length: 109
	I0911 11:19:35.263417 2234986 request.go:1212] Response Body: {"kind":"StorageClassList","apiVersion":"storage.k8s.io/v1","metadata":{"resourceVersion":"339"},"items":[]}
	I0911 11:19:35.263686 2234986 addons.go:231] Setting addon default-storageclass=true in "multinode-378707"
	I0911 11:19:35.263721 2234986 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:19:35.264047 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:19:35.264076 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:19:35.264684 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40737
	I0911 11:19:35.265188 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:19:35.265705 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:19:35.265726 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:19:35.266085 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:19:35.266275 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetState
	I0911 11:19:35.268035 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:35.270676 2234986 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:19:35.272117 2234986 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:19:35.272146 2234986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 11:19:35.272172 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:35.275573 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:35.276068 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:35.276100 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:35.276269 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:35.276462 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:35.276650 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:35.276831 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:19:35.280581 2234986 round_trippers.go:574] Response Status: 200 OK in 24 milliseconds
	I0911 11:19:35.280607 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:35.280621 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:35.280647 2234986 round_trippers.go:580]     Content-Length: 291
	I0911 11:19:35.280661 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:35 GMT
	I0911 11:19:35.280669 2234986 round_trippers.go:580]     Audit-Id: 08c43227-69fc-4406-b758-cefc894919c1
	I0911 11:19:35.280676 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:35.280686 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:35.280699 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:35.280736 2234986 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"341","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0911 11:19:35.280940 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:19:35.280952 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:35.280962 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:35.280972 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:35.281002 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44213
	I0911 11:19:35.281371 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:19:35.281821 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:19:35.281844 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:19:35.282182 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:19:35.282644 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:19:35.282671 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:19:35.297987 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43017
	I0911 11:19:35.298434 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:19:35.298947 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:19:35.298978 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:19:35.299312 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:19:35.299505 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetState
	I0911 11:19:35.299812 2234986 round_trippers.go:574] Response Status: 200 OK in 18 milliseconds
	I0911 11:19:35.299832 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:35.299845 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:35.299858 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:35.299875 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:35.299884 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:35.299895 2234986 round_trippers.go:580]     Content-Length: 291
	I0911 11:19:35.299907 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:35 GMT
	I0911 11:19:35.299916 2234986 round_trippers.go:580]     Audit-Id: cca51687-40d4-4fe7-8f70-2681c291b229
	I0911 11:19:35.301333 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:19:35.301615 2234986 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 11:19:35.301634 2234986 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 11:19:35.301656 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:19:35.304754 2234986 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"341","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":1},"status":{"replicas":0,"selector":"k8s-app=kube-dns"}}
	I0911 11:19:35.304907 2234986 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-378707" context rescaled to 1 replicas
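Editor's note: the GET/PUT pair against the coredns /scale subresource above is how minikube trims CoreDNS from kubeadm's default two replicas down to one for a single-control-plane cluster. The equivalent one-liner with kubectl (a sketch of the same operation, not what the test harness itself runs):

    kubectl -n kube-system scale deployment coredns --replicas=1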
	I0911 11:19:35.304956 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:35.304948 2234986 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:19:35.306831 2234986 out.go:177] * Verifying Kubernetes components...
	I0911 11:19:35.305455 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:19:35.305646 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:19:35.306878 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:19:35.308449 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:19:35.307098 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:19:35.308712 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:19:35.308886 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:19:35.419443 2234986 command_runner.go:130] > apiVersion: v1
	I0911 11:19:35.419475 2234986 command_runner.go:130] > data:
	I0911 11:19:35.419482 2234986 command_runner.go:130] >   Corefile: |
	I0911 11:19:35.419488 2234986 command_runner.go:130] >     .:53 {
	I0911 11:19:35.419494 2234986 command_runner.go:130] >         errors
	I0911 11:19:35.419501 2234986 command_runner.go:130] >         health {
	I0911 11:19:35.419513 2234986 command_runner.go:130] >            lameduck 5s
	I0911 11:19:35.419520 2234986 command_runner.go:130] >         }
	I0911 11:19:35.419527 2234986 command_runner.go:130] >         ready
	I0911 11:19:35.419537 2234986 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0911 11:19:35.419543 2234986 command_runner.go:130] >            pods insecure
	I0911 11:19:35.419552 2234986 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0911 11:19:35.419560 2234986 command_runner.go:130] >            ttl 30
	I0911 11:19:35.419566 2234986 command_runner.go:130] >         }
	I0911 11:19:35.419573 2234986 command_runner.go:130] >         prometheus :9153
	I0911 11:19:35.419586 2234986 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0911 11:19:35.419593 2234986 command_runner.go:130] >            max_concurrent 1000
	I0911 11:19:35.419600 2234986 command_runner.go:130] >         }
	I0911 11:19:35.419607 2234986 command_runner.go:130] >         cache 30
	I0911 11:19:35.419614 2234986 command_runner.go:130] >         loop
	I0911 11:19:35.419624 2234986 command_runner.go:130] >         reload
	I0911 11:19:35.419629 2234986 command_runner.go:130] >         loadbalance
	I0911 11:19:35.419635 2234986 command_runner.go:130] >     }
	I0911 11:19:35.419639 2234986 command_runner.go:130] > kind: ConfigMap
	I0911 11:19:35.419643 2234986 command_runner.go:130] > metadata:
	I0911 11:19:35.419648 2234986 command_runner.go:130] >   creationTimestamp: "2023-09-11T11:19:21Z"
	I0911 11:19:35.419652 2234986 command_runner.go:130] >   name: coredns
	I0911 11:19:35.419656 2234986 command_runner.go:130] >   namespace: kube-system
	I0911 11:19:35.419662 2234986 command_runner.go:130] >   resourceVersion: "252"
	I0911 11:19:35.419669 2234986 command_runner.go:130] >   uid: f37a9dec-5b61-473f-80fb-18b2584b4b79
	I0911 11:19:35.419865 2234986 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 11:19:35.420228 2234986 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:19:35.420583 2234986 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:19:35.421007 2234986 node_ready.go:35] waiting up to 6m0s for node "multinode-378707" to be "Ready" ...
	I0911 11:19:35.421129 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:35.421139 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:35.421150 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:35.421164 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:35.425677 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:19:35.425708 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:35.425720 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:35 GMT
	I0911 11:19:35.425729 2234986 round_trippers.go:580]     Audit-Id: e2804a3c-ed8a-4643-9f2d-b3de44e03ae9
	I0911 11:19:35.425738 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:35.425745 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:35.425754 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:35.425762 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:35.425878 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"333","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:1
9:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5989 chars]
	I0911 11:19:35.426692 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:35.426712 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:35.426724 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:35.426734 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:35.432764 2234986 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0911 11:19:35.432796 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:35.432810 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:35 GMT
	I0911 11:19:35.432851 2234986 round_trippers.go:580]     Audit-Id: d49a1567-b0fa-434d-b19e-98c2cbd656f3
	I0911 11:19:35.432860 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:35.432868 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:35.432880 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:35.432905 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:35.433087 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"333","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:1
9:18Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations [truncated 5989 chars]
	I0911 11:19:35.500203 2234986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:19:35.536085 2234986 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 11:19:35.934233 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:35.934261 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:35.934276 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:35.934283 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:36.169464 2234986 round_trippers.go:574] Response Status: 200 OK in 235 milliseconds
	I0911 11:19:36.169496 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:36.169504 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:36.169510 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:36.169516 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:36 GMT
	I0911 11:19:36.169521 2234986 round_trippers.go:580]     Audit-Id: 1ddb2cd2-c54a-4eb7-835c-4522a3661f20
	I0911 11:19:36.169526 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:36.169532 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:36.170279 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:36.366244 2234986 command_runner.go:130] > configmap/coredns replaced
	I0911 11:19:36.368609 2234986 start.go:901] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
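	For reference, the sed pipeline logged a few lines earlier inserts a hosts block (resolving host.minikube.internal to the host IP) before the "forward . /etc/resolv.conf" directive and a "log" directive before "errors" in the CoreDNS Corefile. A minimal Go sketch of that same text transformation follows, using the Corefile shown above; injectHostRecord is an illustrative name, not minikube's implementation.

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord reproduces the two sed expressions from the log above:
	// insert a hosts block before the forward directive and a "log" directive
	// before the errors line of a CoreDNS Corefile.
	func injectHostRecord(corefile, hostIP string) string {
		hostsBlock := fmt.Sprintf("        hosts {\n           %s host.minikube.internal\n           fallthrough\n        }\n", hostIP)
		var out strings.Builder
		for _, line := range strings.Split(corefile, "\n") {
			trimmed := strings.TrimSpace(line)
			if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
				out.WriteString(hostsBlock)
			}
			if trimmed == "errors" {
				out.WriteString("        log\n")
			}
			out.WriteString(line)
			out.WriteString("\n")
		}
		return out.String()
	}

	func main() {
		corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf {\n           max_concurrent 1000\n        }\n    }"
		fmt.Print(injectHostRecord(corefile, "192.168.39.1"))
	}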
	I0911 11:19:36.433914 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:36.433939 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:36.433947 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:36.433955 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:36.442477 2234986 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0911 11:19:36.442504 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:36.442515 2234986 round_trippers.go:580]     Audit-Id: a4c57cf4-c932-4cf3-b74d-42ddc8d6e63e
	I0911 11:19:36.442523 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:36.442530 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:36.442537 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:36.442545 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:36.442553 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:36 GMT
	I0911 11:19:36.442743 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:36.506931 2234986 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0911 11:19:36.518101 2234986 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0911 11:19:36.531301 2234986 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0911 11:19:36.549244 2234986 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0911 11:19:36.564625 2234986 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0911 11:19:36.582575 2234986 command_runner.go:130] > pod/storage-provisioner created
	I0911 11:19:36.585247 2234986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.084999369s)
	I0911 11:19:36.585270 2234986 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0911 11:19:36.585297 2234986 main.go:141] libmachine: Making call to close driver server
	I0911 11:19:36.585299 2234986 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.049184168s)
	I0911 11:19:36.585310 2234986 main.go:141] libmachine: (multinode-378707) Calling .Close
	I0911 11:19:36.585318 2234986 main.go:141] libmachine: Making call to close driver server
	I0911 11:19:36.585328 2234986 main.go:141] libmachine: (multinode-378707) Calling .Close
	I0911 11:19:36.585763 2234986 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:19:36.585800 2234986 main.go:141] libmachine: (multinode-378707) DBG | Closing plugin on server side
	I0911 11:19:36.585811 2234986 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:19:36.585762 2234986 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:19:36.585830 2234986 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:19:36.585847 2234986 main.go:141] libmachine: Making call to close driver server
	I0911 11:19:36.585861 2234986 main.go:141] libmachine: (multinode-378707) Calling .Close
	I0911 11:19:36.585892 2234986 main.go:141] libmachine: (multinode-378707) DBG | Closing plugin on server side
	I0911 11:19:36.585995 2234986 main.go:141] libmachine: Making call to close driver server
	I0911 11:19:36.586018 2234986 main.go:141] libmachine: (multinode-378707) Calling .Close
	I0911 11:19:36.586115 2234986 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:19:36.586140 2234986 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:19:36.586161 2234986 main.go:141] libmachine: (multinode-378707) DBG | Closing plugin on server side
	I0911 11:19:36.586224 2234986 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:19:36.586240 2234986 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:19:36.586256 2234986 main.go:141] libmachine: Making call to close driver server
	I0911 11:19:36.586269 2234986 main.go:141] libmachine: (multinode-378707) Calling .Close
	I0911 11:19:36.587628 2234986 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:19:36.587655 2234986 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:19:36.589569 2234986 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 11:19:36.591096 2234986 addons.go:502] enable addons completed in 1.360331973s: enabled=[storage-provisioner default-storageclass]
	I0911 11:19:36.933728 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:36.933755 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:36.933763 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:36.933770 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:36.937987 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:19:36.938013 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:36.938025 2234986 round_trippers.go:580]     Audit-Id: f156e09e-760f-4046-98bf-230db240d321
	I0911 11:19:36.938034 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:36.938042 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:36.938049 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:36.938056 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:36.938064 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:36 GMT
	I0911 11:19:36.938249 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:37.433944 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:37.433976 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:37.433989 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:37.434000 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:37.436596 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:37.436618 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:37.436625 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:37.436631 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:37.436637 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:37.436642 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:37 GMT
	I0911 11:19:37.436647 2234986 round_trippers.go:580]     Audit-Id: 2cf94823-e3e8-4e15-837a-33bce2a337a1
	I0911 11:19:37.436653 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:37.436829 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:37.437233 2234986 node_ready.go:58] node "multinode-378707" has status "Ready":"False"
	I0911 11:19:37.934630 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:37.934658 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:37.934666 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:37.934672 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:37.937986 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:37.938010 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:37.938018 2234986 round_trippers.go:580]     Audit-Id: 8e470108-9705-42fb-b2b9-47d98244ddbd
	I0911 11:19:37.938024 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:37.938029 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:37.938035 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:37.938040 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:37.938045 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:37 GMT
	I0911 11:19:37.938219 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:38.433825 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:38.433850 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:38.433858 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:38.433865 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:38.436516 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:38.436537 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:38.436546 2234986 round_trippers.go:580]     Audit-Id: d378b6f7-9889-473e-bc57-f5506d22aa58
	I0911 11:19:38.436554 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:38.436562 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:38.436570 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:38.436584 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:38.436593 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:38 GMT
	I0911 11:19:38.436831 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:38.934099 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:38.934122 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:38.934131 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:38.934137 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:38.937123 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:38.937143 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:38.937150 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:38.937159 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:38.937167 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:38.937176 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:38.937186 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:38 GMT
	I0911 11:19:38.937194 2234986 round_trippers.go:580]     Audit-Id: e5d0c5ae-34df-4335-b045-4357e59e470c
	I0911 11:19:38.937592 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:39.434084 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:39.434115 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:39.434124 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:39.434130 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:39.437395 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:39.437425 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:39.437435 2234986 round_trippers.go:580]     Audit-Id: db0e7b8f-2373-4738-99fe-efc88f9577c4
	I0911 11:19:39.437441 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:39.437446 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:39.437452 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:39.437457 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:39.437463 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:39 GMT
	I0911 11:19:39.437579 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:39.437909 2234986 node_ready.go:58] node "multinode-378707" has status "Ready":"False"
	I0911 11:19:39.934096 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:39.934121 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:39.934130 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:39.934138 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:39.937063 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:39.937089 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:39.937099 2234986 round_trippers.go:580]     Audit-Id: 52b6436c-5e57-4e5c-8fc5-870c6f2ebea6
	I0911 11:19:39.937107 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:39.937114 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:39.937121 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:39.937129 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:39.937138 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:39 GMT
	I0911 11:19:39.937346 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:40.433695 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:40.433721 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:40.433730 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:40.433737 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:40.436441 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:40.436462 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:40.436469 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:40.436475 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:40.436481 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:40.436486 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:40 GMT
	I0911 11:19:40.436491 2234986 round_trippers.go:580]     Audit-Id: 08aeb524-397f-4c93-a6d4-de7914a410f5
	I0911 11:19:40.436496 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:40.436656 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:40.934123 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:40.934153 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:40.934162 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:40.934168 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:40.937328 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:40.937355 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:40.937366 2234986 round_trippers.go:580]     Audit-Id: 967d4f01-cf55-46a7-9cd7-95fc02e7f559
	I0911 11:19:40.937376 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:40.937384 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:40.937394 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:40.937403 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:40.937418 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:40 GMT
	I0911 11:19:40.937565 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:41.434326 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:41.434354 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:41.434363 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:41.434370 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:41.437680 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:41.437708 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:41.437719 2234986 round_trippers.go:580]     Audit-Id: a6067465-46ab-4b5c-a5f4-9634083a12a6
	I0911 11:19:41.437729 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:41.437738 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:41.437747 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:41.437756 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:41.437765 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:41 GMT
	I0911 11:19:41.438015 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"360","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6093 chars]
	I0911 11:19:41.438467 2234986 node_ready.go:58] node "multinode-378707" has status "Ready":"False"
	I0911 11:19:41.934723 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:41.934749 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:41.934758 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:41.934764 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:41.937702 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:41.937731 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:41.937743 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:41 GMT
	I0911 11:19:41.937751 2234986 round_trippers.go:580]     Audit-Id: b10f7922-c026-4b36-aaad-2b0d20ba3cf1
	I0911 11:19:41.937758 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:41.937767 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:41.937775 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:41.937787 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:41.941328 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:41.941752 2234986 node_ready.go:49] node "multinode-378707" has status "Ready":"True"
	I0911 11:19:41.941772 2234986 node_ready.go:38] duration metric: took 6.520738777s waiting for node "multinode-378707" to be "Ready" ...
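	For reference, the node_ready.go wait that just completed amounts to re-fetching the node object roughly every 500ms until its Ready condition reports True. A rough client-go sketch of that loop is below, assuming a standard kubeconfig; waitForNodeReady is an illustrative name, not minikube's function, and the poll interval is inferred from the timestamps above.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the node until its Ready condition is True or the
	// timeout expires, mirroring the GET loop visible in the log above.
	func waitForNodeReady(ctx context.Context, client kubernetes.Interface, name string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			node, err := client.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, cond := range node.Status.Conditions {
					if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond) // the log above polls about twice per second
		}
		return fmt.Errorf("node %q not Ready within %v", name, timeout)
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // ~/.kube/config
		if err != nil {
			panic(err)
		}
		client := kubernetes.NewForConfigOrDie(cfg)
		if err := waitForNodeReady(context.Background(), client, "multinode-378707", 6*time.Minute); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready")
	}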
	I0911 11:19:41.941783 2234986 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:19:41.941879 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:19:41.941888 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:41.941902 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:41.941913 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:41.947002 2234986 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0911 11:19:41.947031 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:41.947042 2234986 round_trippers.go:580]     Audit-Id: 8ae1bbc8-59a6-46ba-b072-3b604b875c29
	I0911 11:19:41.947052 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:41.947060 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:41.947068 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:41.947080 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:41.947093 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:41 GMT
	I0911 11:19:41.949669 2234986 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"424"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"423","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53919 chars]
	I0911 11:19:41.953032 2234986 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:41.953146 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:19:41.953157 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:41.953165 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:41.953172 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:41.956884 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:41.956911 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:41.956921 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:41.956929 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:41 GMT
	I0911 11:19:41.956937 2234986 round_trippers.go:580]     Audit-Id: 49eebd7d-9807-4bcb-903f-e11f22cc4e58
	I0911 11:19:41.956945 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:41.956953 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:41.956962 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:41.957456 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"423","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0911 11:19:41.958021 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:41.958034 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:41.958064 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:41.958079 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:41.961568 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:41.961589 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:41.961596 2234986 round_trippers.go:580]     Audit-Id: 9b7504a6-4853-4760-b17f-7ecdf84cc7cf
	I0911 11:19:41.961601 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:41.961607 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:41.961612 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:41.961618 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:41.961623 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:41 GMT
	I0911 11:19:41.962149 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:41.962547 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:19:41.962568 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:41.962575 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:41.962582 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:41.965925 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:41.965950 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:41.965959 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:41.965967 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:41.965975 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:41.965982 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:41 GMT
	I0911 11:19:41.965990 2234986 round_trippers.go:580]     Audit-Id: 41a7bc37-4f35-43a3-89b9-1998956a2c91
	I0911 11:19:41.966001 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:41.966232 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"423","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0911 11:19:41.966794 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:41.966809 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:41.966832 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:41.966845 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:41.969475 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:41.969500 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:41.969510 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:41.969519 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:41.969528 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:41 GMT
	I0911 11:19:41.969536 2234986 round_trippers.go:580]     Audit-Id: 2d53bc02-1038-41ff-8381-2e10869c250b
	I0911 11:19:41.969544 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:41.969557 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:41.969722 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:42.470597 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:19:42.470629 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:42.470640 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:42.470650 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:42.474790 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:19:42.474815 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:42.474823 2234986 round_trippers.go:580]     Audit-Id: 1097d05c-278c-4b91-b834-9ffe616fa101
	I0911 11:19:42.474829 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:42.474835 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:42.474840 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:42.474845 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:42.474851 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:42 GMT
	I0911 11:19:42.475150 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"423","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0911 11:19:42.475651 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:42.475663 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:42.475671 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:42.475677 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:42.479398 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:42.479422 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:42.479433 2234986 round_trippers.go:580]     Audit-Id: 9ac1e5af-6a54-4010-a527-89e6a1abda70
	I0911 11:19:42.479442 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:42.479451 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:42.479462 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:42.479468 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:42.479473 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:42 GMT
	I0911 11:19:42.480886 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:42.970356 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:19:42.970381 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:42.970390 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:42.970396 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:42.973555 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:42.973583 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:42.973591 2234986 round_trippers.go:580]     Audit-Id: 0e0a5240-5f41-4394-8818-0f1779c04ed6
	I0911 11:19:42.973597 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:42.973602 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:42.973608 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:42.973617 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:42.973626 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:42 GMT
	I0911 11:19:42.973760 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"423","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6152 chars]
	I0911 11:19:42.974365 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:42.974383 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:42.974394 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:42.974405 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:42.976775 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:42.976798 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:42.976825 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:42.976835 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:42.976859 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:42.976871 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:42.976880 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:42 GMT
	I0911 11:19:42.976895 2234986 round_trippers.go:580]     Audit-Id: aa7729b4-1e8a-45c2-9ee5-00087aba4f7b
	I0911 11:19:42.977335 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:43.471170 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:19:43.471197 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.471205 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.471212 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.474073 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:43.474096 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.474103 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.474109 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.474114 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.474120 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.474127 2234986 round_trippers.go:580]     Audit-Id: 1fef125b-a842-40a9-b161-9682e076cb95
	I0911 11:19:43.474135 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.474396 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"437","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0911 11:19:43.474899 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:43.474916 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.474923 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.474940 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.477244 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:43.477257 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.477263 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.477268 2234986 round_trippers.go:580]     Audit-Id: 1c5cd038-6d9f-4124-b1f6-ddf80caf8e5a
	I0911 11:19:43.477274 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.477279 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.477284 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.477290 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.477682 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:43.478104 2234986 pod_ready.go:92] pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:43.478123 2234986 pod_ready.go:81] duration metric: took 1.525066362s waiting for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.478137 2234986 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.478191 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-378707
	I0911 11:19:43.478199 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.478206 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.478212 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.480431 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:43.480445 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.480451 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.480457 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.480464 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.480473 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.480482 2234986 round_trippers.go:580]     Audit-Id: 3d1adf46-a00b-4a5a-8011-4e10139cbf00
	I0911 11:19:43.480498 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.480675 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-378707","namespace":"kube-system","uid":"30882221-42a4-42a4-9911-63a8ff26c903","resourceVersion":"290","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.237:2379","kubernetes.io/config.hash":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.mirror":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.seen":"2023-09-11T11:19:21.954681050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0911 11:19:43.481083 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:43.481096 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.481103 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.481109 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.483050 2234986 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:19:43.483071 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.483082 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.483091 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.483098 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.483103 2234986 round_trippers.go:580]     Audit-Id: bd044c4d-157e-4aa3-9597-e51f83f00632
	I0911 11:19:43.483108 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.483114 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.483295 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:43.483626 2234986 pod_ready.go:92] pod "etcd-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:43.483641 2234986 pod_ready.go:81] duration metric: took 5.498588ms waiting for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.483654 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.483707 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-378707
	I0911 11:19:43.483716 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.483723 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.483729 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.485656 2234986 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:19:43.485672 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.485679 2234986 round_trippers.go:580]     Audit-Id: f4bf4047-4f84-4a54-942e-4dbbf8a0736c
	I0911 11:19:43.485684 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.485689 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.485694 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.485700 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.485705 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.486027 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-378707","namespace":"kube-system","uid":"6cc96039-3a17-4243-93b6-4bf3ed6f69a8","resourceVersion":"328","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.237:8443","kubernetes.io/config.hash":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.mirror":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.seen":"2023-09-11T11:19:21.954683933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0911 11:19:43.486514 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:43.486531 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.486538 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.486547 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.488308 2234986 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:19:43.488324 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.488330 2234986 round_trippers.go:580]     Audit-Id: d19d3a34-b1c9-49b1-ba18-4d05dc951b63
	I0911 11:19:43.488336 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.488341 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.488346 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.488352 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.488357 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.488505 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:43.488905 2234986 pod_ready.go:92] pod "kube-apiserver-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:43.488929 2234986 pod_ready.go:81] duration metric: took 5.264775ms waiting for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.488944 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.489014 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-378707
	I0911 11:19:43.489025 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.489035 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.489045 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.490955 2234986 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:19:43.490971 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.490978 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.490983 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.490991 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.490997 2234986 round_trippers.go:580]     Audit-Id: 2322cb96-7455-48af-ba35-53ccc374e1bd
	I0911 11:19:43.491005 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.491013 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.491190 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-378707","namespace":"kube-system","uid":"7bd2ecf1-1558-4680-9075-d30d989a0568","resourceVersion":"294","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.mirror":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.seen":"2023-09-11T11:19:21.954684910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0911 11:19:43.535019 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:43.535047 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.535056 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.535063 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.538090 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:43.538120 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.538132 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.538152 2234986 round_trippers.go:580]     Audit-Id: 6d013a7a-9453-4407-bc51-c9358e43a555
	I0911 11:19:43.538160 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.538169 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.538177 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.538185 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.538397 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:43.538741 2234986 pod_ready.go:92] pod "kube-controller-manager-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:43.538758 2234986 pod_ready.go:81] duration metric: took 49.802857ms waiting for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.538770 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.735216 2234986 request.go:629] Waited for 196.372839ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:19:43.735302 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:19:43.735306 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.735314 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.735321 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.738391 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:43.738414 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.738422 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.738428 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.738433 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.738438 2234986 round_trippers.go:580]     Audit-Id: 7b147d22-4581-4c37-ae59-1d91e4538f57
	I0911 11:19:43.738444 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.738449 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.739265 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-snbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"c3bb9995-3cd6-4433-a326-3da0a7f4aff3","resourceVersion":"408","creationTimestamp":"2023-09-11T11:19:35Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0911 11:19:43.935107 2234986 request.go:629] Waited for 195.38055ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:43.935170 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:43.935175 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:43.935184 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:43.935191 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:43.938054 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:43.938076 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:43.938087 2234986 round_trippers.go:580]     Audit-Id: 0a2b6e81-5f86-485c-b157-cc9aac2d6293
	I0911 11:19:43.938096 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:43.938105 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:43.938112 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:43.938120 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:43.938129 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:43 GMT
	I0911 11:19:43.938473 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:43.938804 2234986 pod_ready.go:92] pod "kube-proxy-snbc8" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:43.938817 2234986 pod_ready.go:81] duration metric: took 400.042281ms waiting for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:43.938827 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:44.135249 2234986 request.go:629] Waited for 196.351664ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:19:44.135332 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:19:44.135337 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:44.135345 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:44.135351 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:44.138598 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:44.138627 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:44.138638 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:44.138647 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:44.138655 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:44.138667 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:44.138675 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:44 GMT
	I0911 11:19:44.138684 2234986 round_trippers.go:580]     Audit-Id: df415507-11cd-440e-9924-d34b654b61c5
	I0911 11:19:44.138886 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-378707","namespace":"kube-system","uid":"51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7","resourceVersion":"295","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.mirror":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.seen":"2023-09-11T11:19:21.954685589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0911 11:19:44.335697 2234986 request.go:629] Waited for 196.417998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:44.335779 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:19:44.335785 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:44.335794 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:44.335813 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:44.340385 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:19:44.340409 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:44.340417 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:44.340423 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:44.340436 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:44.340444 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:44.340454 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:44 GMT
	I0911 11:19:44.340462 2234986 round_trippers.go:580]     Audit-Id: ffb53591-a02c-4c82-a0f3-bf0d44c2968b
	I0911 11:19:44.340633 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:19:44.341049 2234986 pod_ready.go:92] pod "kube-scheduler-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:19:44.341068 2234986 pod_ready.go:81] duration metric: took 402.234976ms waiting for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:19:44.341080 2234986 pod_ready.go:38] duration metric: took 2.399285715s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
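The readiness wait logged above polls each control-plane pod (coredns, etcd, kube-apiserver, kube-controller-manager, kube-proxy, kube-scheduler) until its Ready condition reports True, with a per-pod timeout of 6m0s. A minimal client-go sketch of that polling pattern, assuming a kubeconfig at the default location and reusing the coredns pod name from the log (this is not minikube's actual pod_ready.go helper):

    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/apimachinery/pkg/util/wait"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Assumption: default kubeconfig; minikube builds its client from the profile instead.
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// Poll every 500ms, give up after 6 minutes (mirrors the "waiting up to 6m0s" lines above).
    	err = wait.PollImmediate(500*time.Millisecond, 6*time.Minute, func() (bool, error) {
    		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5dd5756b68-fzpjk", metav1.GetOptions{})
    		if err != nil {
    			return false, nil // treat lookup errors as transient and keep polling
    		}
    		for _, c := range pod.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				return true, nil
    			}
    		}
    		return false, nil
    	})
    	fmt.Println("ready wait finished, err =", err)
    }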
	I0911 11:19:44.341099 2234986 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:19:44.341161 2234986 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:19:44.358749 2234986 command_runner.go:130] > 1108
	I0911 11:19:44.358814 2234986 api_server.go:72] duration metric: took 9.053834376s to wait for apiserver process to appear ...
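The "apiserver process" check above runs pgrep inside the VM and treats a matching PID (1108 here) as success. A rough local sketch of the same idea, run directly rather than over minikube's ssh_runner (illustrative only; requires sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    )

    func apiserverRunning() bool {
    	// pgrep exits non-zero when nothing matches, so any error means "not found".
    	out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
    	if err != nil {
    		return false
    	}
    	return strings.TrimSpace(string(out)) != ""
    }

    func main() {
    	fmt.Println("kube-apiserver process found:", apiserverRunning())
    }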
	I0911 11:19:44.358824 2234986 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:19:44.358841 2234986 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:19:44.365493 2234986 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0911 11:19:44.365581 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/version
	I0911 11:19:44.365587 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:44.365599 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:44.365607 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:44.366739 2234986 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:19:44.366756 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:44.366763 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:44.366768 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:44.366778 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:44.366784 2234986 round_trippers.go:580]     Content-Length: 263
	I0911 11:19:44.366789 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:44 GMT
	I0911 11:19:44.366796 2234986 round_trippers.go:580]     Audit-Id: 39a2d131-8fe4-4347-97c2-d4be5423c19f
	I0911 11:19:44.366801 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:44.366944 2234986 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0911 11:19:44.367046 2234986 api_server.go:141] control plane version: v1.28.1
	I0911 11:19:44.367067 2234986 api_server.go:131] duration metric: took 8.236457ms to wait for apiserver health ...
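The health check above hits /healthz (expecting the literal body "ok") and then /version to read the control-plane build info printed in the log. A small client-go sketch of both probes, again assuming a default kubeconfig rather than minikube's profile-derived client:

    package main

    import (
    	"context"
    	"fmt"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// /healthz returns the literal body "ok" when the apiserver is healthy.
    	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("healthz: %s\n", body)
    	// /version returns the JSON shown above (major, minor, gitVersion, ...).
    	info, err := cs.Discovery().ServerVersion()
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("control plane version:", info.GitVersion) // e.g. v1.28.1
    }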
	I0911 11:19:44.367080 2234986 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:19:44.535455 2234986 request.go:629] Waited for 168.256882ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:19:44.535529 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:19:44.535535 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:44.535548 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:44.535562 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:44.540311 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:19:44.540340 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:44.540348 2234986 round_trippers.go:580]     Audit-Id: 741e762d-a8ed-4c47-ab76-20e9518f6f78
	I0911 11:19:44.540354 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:44.540359 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:44.540364 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:44.540370 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:44.540379 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:44 GMT
	I0911 11:19:44.541386 2234986 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"442"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"437","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0911 11:19:44.543050 2234986 system_pods.go:59] 8 kube-system pods found
	I0911 11:19:44.543071 2234986 system_pods.go:61] "coredns-5dd5756b68-fzpjk" [f72f6ba0-92a3-4108-a37f-e6ad5009c37c] Running
	I0911 11:19:44.543076 2234986 system_pods.go:61] "etcd-multinode-378707" [30882221-42a4-42a4-9911-63a8ff26c903] Running
	I0911 11:19:44.543082 2234986 system_pods.go:61] "kindnet-gxpnd" [e59da67c-e818-45db-bbcd-db99a4310bf1] Running
	I0911 11:19:44.543087 2234986 system_pods.go:61] "kube-apiserver-multinode-378707" [6cc96039-3a17-4243-93b6-4bf3ed6f69a8] Running
	I0911 11:19:44.543094 2234986 system_pods.go:61] "kube-controller-manager-multinode-378707" [7bd2ecf1-1558-4680-9075-d30d989a0568] Running
	I0911 11:19:44.543098 2234986 system_pods.go:61] "kube-proxy-snbc8" [c3bb9995-3cd6-4433-a326-3da0a7f4aff3] Running
	I0911 11:19:44.543102 2234986 system_pods.go:61] "kube-scheduler-multinode-378707" [51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7] Running
	I0911 11:19:44.543106 2234986 system_pods.go:61] "storage-provisioner" [77e1a93d-fc34-4f05-8320-169bb6c93e46] Running
	I0911 11:19:44.543111 2234986 system_pods.go:74] duration metric: took 176.022553ms to wait for pod list to return data ...
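The "8 kube-system pods found" summary above comes from listing the kube-system namespace and reporting each pod's name, UID and phase. A minimal sketch of that listing step under the same default-kubeconfig assumption:

    package main

    import (
    	"context"
    	"fmt"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
    	for _, p := range pods.Items {
    		// Matches the "name [uid] Running" lines in the log above.
    		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
    	}
    }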
	I0911 11:19:44.543125 2234986 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:19:44.735605 2234986 request.go:629] Waited for 192.397669ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:19:44.735673 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:19:44.735677 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:44.735688 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:44.735695 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:44.738681 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:19:44.738700 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:44.738707 2234986 round_trippers.go:580]     Audit-Id: 072ef357-912c-42d5-97a3-f094eacd8114
	I0911 11:19:44.738730 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:44.738736 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:44.738742 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:44.738748 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:44.738755 2234986 round_trippers.go:580]     Content-Length: 261
	I0911 11:19:44.738763 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:44 GMT
	I0911 11:19:44.738797 2234986 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"86e8c023-176f-41fb-9ec0-ea8561fe161a","resourceVersion":"334","creationTimestamp":"2023-09-11T11:19:34Z"}}]}
	I0911 11:19:44.739022 2234986 default_sa.go:45] found service account: "default"
	I0911 11:19:44.739039 2234986 default_sa.go:55] duration metric: took 195.909692ms for default service account to be created ...
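The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines in this phase are emitted by client-go's default token-bucket rate limiter delaying requests on the client side. A sketch of where those limits live on the client config (the QPS/Burst values here are arbitrary, not what minikube uses):

    package main

    import (
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/flowcontrol"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumption: default kubeconfig
    	if err != nil {
    		panic(err)
    	}
    	// Either raise the built-in limits...
    	cfg.QPS = 50
    	cfg.Burst = 100
    	// ...or install an explicit token-bucket limiter.
    	cfg.RateLimiter = flowcontrol.NewTokenBucketRateLimiter(50, 100)
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		panic(err)
    	}
    }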
	I0911 11:19:44.739048 2234986 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:19:44.935516 2234986 request.go:629] Waited for 196.370917ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:19:44.935579 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:19:44.935584 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:44.935592 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:44.935598 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:44.940301 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:19:44.940330 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:44.940343 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:44 GMT
	I0911 11:19:44.940351 2234986 round_trippers.go:580]     Audit-Id: fe476f73-29d7-4a22-9848-306687e5f9ff
	I0911 11:19:44.940360 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:44.940368 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:44.940375 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:44.940383 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:44.941457 2234986 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"437","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 53997 chars]
	I0911 11:19:44.943842 2234986 system_pods.go:86] 8 kube-system pods found
	I0911 11:19:44.943866 2234986 system_pods.go:89] "coredns-5dd5756b68-fzpjk" [f72f6ba0-92a3-4108-a37f-e6ad5009c37c] Running
	I0911 11:19:44.943874 2234986 system_pods.go:89] "etcd-multinode-378707" [30882221-42a4-42a4-9911-63a8ff26c903] Running
	I0911 11:19:44.943881 2234986 system_pods.go:89] "kindnet-gxpnd" [e59da67c-e818-45db-bbcd-db99a4310bf1] Running
	I0911 11:19:44.943888 2234986 system_pods.go:89] "kube-apiserver-multinode-378707" [6cc96039-3a17-4243-93b6-4bf3ed6f69a8] Running
	I0911 11:19:44.943895 2234986 system_pods.go:89] "kube-controller-manager-multinode-378707" [7bd2ecf1-1558-4680-9075-d30d989a0568] Running
	I0911 11:19:44.943904 2234986 system_pods.go:89] "kube-proxy-snbc8" [c3bb9995-3cd6-4433-a326-3da0a7f4aff3] Running
	I0911 11:19:44.943911 2234986 system_pods.go:89] "kube-scheduler-multinode-378707" [51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7] Running
	I0911 11:19:44.943922 2234986 system_pods.go:89] "storage-provisioner" [77e1a93d-fc34-4f05-8320-169bb6c93e46] Running
	I0911 11:19:44.943932 2234986 system_pods.go:126] duration metric: took 204.877928ms to wait for k8s-apps to be running ...
	I0911 11:19:44.943944 2234986 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:19:44.944003 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:19:44.960502 2234986 system_svc.go:56] duration metric: took 16.549698ms WaitForService to wait for kubelet.
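The kubelet service check above simply runs the command shown in the log and treats a zero exit status as "active". A local sketch of that check (minikube runs it over SSH inside the VM; this version runs it on the current host and needs sudo):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Mirrors the logged command: sudo systemctl is-active --quiet service kubelet
    	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet").Run()
    	fmt.Println("kubelet active:", err == nil)
    }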
	I0911 11:19:44.960530 2234986 kubeadm.go:581] duration metric: took 9.655551332s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:19:44.960550 2234986 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:19:45.134921 2234986 request.go:629] Waited for 174.290829ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0911 11:19:45.135004 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0911 11:19:45.135011 2234986 round_trippers.go:469] Request Headers:
	I0911 11:19:45.135019 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:19:45.135026 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:19:45.138048 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:19:45.138089 2234986 round_trippers.go:577] Response Headers:
	I0911 11:19:45.138099 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:19:45 GMT
	I0911 11:19:45.138112 2234986 round_trippers.go:580]     Audit-Id: 1724090e-9ad4-44aa-a679-bb8eafd75b75
	I0911 11:19:45.138123 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:19:45.138133 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:19:45.138141 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:19:45.138152 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:19:45.138329 2234986 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"443"},"items":[{"metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 5952 chars]
	I0911 11:19:45.138855 2234986 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:19:45.138887 2234986 node_conditions.go:123] node cpu capacity is 2
	I0911 11:19:45.138902 2234986 node_conditions.go:105] duration metric: took 178.346111ms to run NodePressure ...
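The NodePressure step above reads each node's reported capacity (ephemeral storage 17784752Ki and 2 CPUs here). A sketch of pulling those figures from the Node objects, under the same default-kubeconfig assumption:

    package main

    import (
    	"context"
    	"fmt"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, n := range nodes.Items {
    		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
    		cpu := n.Status.Capacity[corev1.ResourceCPU]
    		fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
    	}
    }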
	I0911 11:19:45.138917 2234986 start.go:228] waiting for startup goroutines ...
	I0911 11:19:45.138928 2234986 start.go:233] waiting for cluster config update ...
	I0911 11:19:45.138944 2234986 start.go:242] writing updated cluster config ...
	I0911 11:19:45.141379 2234986 out.go:177] 
	I0911 11:19:45.142844 2234986 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:19:45.142940 2234986 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:19:45.144683 2234986 out.go:177] * Starting worker node multinode-378707-m02 in cluster multinode-378707
	I0911 11:19:45.146072 2234986 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:19:45.146100 2234986 cache.go:57] Caching tarball of preloaded images
	I0911 11:19:45.146184 2234986 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:19:45.146196 2234986 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:19:45.146291 2234986 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:19:45.146471 2234986 start.go:365] acquiring machines lock for multinode-378707-m02: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:19:45.146542 2234986 start.go:369] acquired machines lock for "multinode-378707-m02" in 45.853µs
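The "acquiring machines lock" step serializes VM creation per store: the spec printed at start.go:365 (Name/Clock/Delay/Timeout) indicates a named lock polled every 500ms with a 13m deadline. A simplified stand-in for that pattern, not the actual lock implementation minikube uses:

    package main

    import (
        "errors"
        "fmt"
        "sync"
        "time"
    )

    // machineLocks is a hypothetical process-local registry of named locks.
    var (
        mu           sync.Mutex
        machineLocks = map[string]chan struct{}{}
    )

    // acquire polls every delay until the named lock is free or timeout expires.
    func acquire(name string, delay, timeout time.Duration) (release func(), err error) {
        deadline := time.Now().Add(timeout)
        for {
            mu.Lock()
            ch, ok := machineLocks[name]
            if !ok {
                ch = make(chan struct{}, 1)
                machineLocks[name] = ch
            }
            mu.Unlock()
            select {
            case ch <- struct{}{}: // lock acquired
                return func() { <-ch }, nil
            default:
            }
            if time.Now().After(deadline) {
                return nil, errors.New("timed out waiting for machine lock " + name)
            }
            time.Sleep(delay)
        }
    }

    func main() {
        release, err := acquire("multinode-378707-m02", 500*time.Millisecond, 13*time.Minute)
        if err != nil {
            fmt.Println(err)
            return
        }
        defer release()
        fmt.Println("holding lock; create the VM here")
    }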
	I0911 11:19:45.146568 2234986 start.go:93] Provisioning new machine with config: &{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesCon
fig:{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:
true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name:m02 IP: Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:19:45.146656 2234986 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0911 11:19:45.148395 2234986 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 11:19:45.148490 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:19:45.148521 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:19:45.163477 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0911 11:19:45.163972 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:19:45.164511 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:19:45.164532 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:19:45.164893 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:19:45.165107 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetMachineName
	I0911 11:19:45.165260 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:19:45.165427 2234986 start.go:159] libmachine.API.Create for "multinode-378707" (driver="kvm2")
	I0911 11:19:45.165455 2234986 client.go:168] LocalClient.Create starting
	I0911 11:19:45.165490 2234986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem
	I0911 11:19:45.165522 2234986 main.go:141] libmachine: Decoding PEM data...
	I0911 11:19:45.165540 2234986 main.go:141] libmachine: Parsing certificate...
	I0911 11:19:45.165600 2234986 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem
	I0911 11:19:45.165618 2234986 main.go:141] libmachine: Decoding PEM data...
	I0911 11:19:45.165629 2234986 main.go:141] libmachine: Parsing certificate...
	I0911 11:19:45.165647 2234986 main.go:141] libmachine: Running pre-create checks...
	I0911 11:19:45.165655 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .PreCreateCheck
	I0911 11:19:45.165846 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetConfigRaw
	I0911 11:19:45.166272 2234986 main.go:141] libmachine: Creating machine...
	I0911 11:19:45.166288 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .Create
	I0911 11:19:45.166437 2234986 main.go:141] libmachine: (multinode-378707-m02) Creating KVM machine...
	I0911 11:19:45.167805 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found existing default KVM network
	I0911 11:19:45.168009 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found existing private KVM network mk-multinode-378707
	I0911 11:19:45.168162 2234986 main.go:141] libmachine: (multinode-378707-m02) Setting up store path in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02 ...
	I0911 11:19:45.168189 2234986 main.go:141] libmachine: (multinode-378707-m02) Building disk image from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 11:19:45.168266 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:45.168144 2235361 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:19:45.168371 2234986 main.go:141] libmachine: (multinode-378707-m02) Downloading /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0911 11:19:45.413120 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:45.412962 2235361 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa...
	I0911 11:19:45.691038 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:45.690895 2235361 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/multinode-378707-m02.rawdisk...
	I0911 11:19:45.691077 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Writing magic tar header
	I0911 11:19:45.691090 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Writing SSH key tar header
	I0911 11:19:45.691099 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:45.691011 2235361 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02 ...
	I0911 11:19:45.691111 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02
	I0911 11:19:45.691182 2234986 main.go:141] libmachine: (multinode-378707-m02) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02 (perms=drwx------)
	I0911 11:19:45.691203 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines
	I0911 11:19:45.691211 2234986 main.go:141] libmachine: (multinode-378707-m02) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines (perms=drwxr-xr-x)
	I0911 11:19:45.691226 2234986 main.go:141] libmachine: (multinode-378707-m02) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube (perms=drwxr-xr-x)
	I0911 11:19:45.691239 2234986 main.go:141] libmachine: (multinode-378707-m02) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273 (perms=drwxrwxr-x)
	I0911 11:19:45.691255 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:19:45.691274 2234986 main.go:141] libmachine: (multinode-378707-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0911 11:19:45.691285 2234986 main.go:141] libmachine: (multinode-378707-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0911 11:19:45.691301 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273
	I0911 11:19:45.691318 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0911 11:19:45.691334 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Checking permissions on dir: /home/jenkins
	I0911 11:19:45.691340 2234986 main.go:141] libmachine: (multinode-378707-m02) Creating domain...
	I0911 11:19:45.691392 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Checking permissions on dir: /home
	I0911 11:19:45.691425 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Skipping /home - not owner
	I0911 11:19:45.692270 2234986 main.go:141] libmachine: (multinode-378707-m02) define libvirt domain using xml: 
	I0911 11:19:45.692297 2234986 main.go:141] libmachine: (multinode-378707-m02) <domain type='kvm'>
	I0911 11:19:45.692311 2234986 main.go:141] libmachine: (multinode-378707-m02)   <name>multinode-378707-m02</name>
	I0911 11:19:45.692319 2234986 main.go:141] libmachine: (multinode-378707-m02)   <memory unit='MiB'>2200</memory>
	I0911 11:19:45.692327 2234986 main.go:141] libmachine: (multinode-378707-m02)   <vcpu>2</vcpu>
	I0911 11:19:45.692332 2234986 main.go:141] libmachine: (multinode-378707-m02)   <features>
	I0911 11:19:45.692339 2234986 main.go:141] libmachine: (multinode-378707-m02)     <acpi/>
	I0911 11:19:45.692344 2234986 main.go:141] libmachine: (multinode-378707-m02)     <apic/>
	I0911 11:19:45.692350 2234986 main.go:141] libmachine: (multinode-378707-m02)     <pae/>
	I0911 11:19:45.692356 2234986 main.go:141] libmachine: (multinode-378707-m02)     
	I0911 11:19:45.692365 2234986 main.go:141] libmachine: (multinode-378707-m02)   </features>
	I0911 11:19:45.692370 2234986 main.go:141] libmachine: (multinode-378707-m02)   <cpu mode='host-passthrough'>
	I0911 11:19:45.692379 2234986 main.go:141] libmachine: (multinode-378707-m02)   
	I0911 11:19:45.692384 2234986 main.go:141] libmachine: (multinode-378707-m02)   </cpu>
	I0911 11:19:45.692393 2234986 main.go:141] libmachine: (multinode-378707-m02)   <os>
	I0911 11:19:45.692399 2234986 main.go:141] libmachine: (multinode-378707-m02)     <type>hvm</type>
	I0911 11:19:45.692407 2234986 main.go:141] libmachine: (multinode-378707-m02)     <boot dev='cdrom'/>
	I0911 11:19:45.692413 2234986 main.go:141] libmachine: (multinode-378707-m02)     <boot dev='hd'/>
	I0911 11:19:45.692420 2234986 main.go:141] libmachine: (multinode-378707-m02)     <bootmenu enable='no'/>
	I0911 11:19:45.692425 2234986 main.go:141] libmachine: (multinode-378707-m02)   </os>
	I0911 11:19:45.692432 2234986 main.go:141] libmachine: (multinode-378707-m02)   <devices>
	I0911 11:19:45.692438 2234986 main.go:141] libmachine: (multinode-378707-m02)     <disk type='file' device='cdrom'>
	I0911 11:19:45.692448 2234986 main.go:141] libmachine: (multinode-378707-m02)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/boot2docker.iso'/>
	I0911 11:19:45.692454 2234986 main.go:141] libmachine: (multinode-378707-m02)       <target dev='hdc' bus='scsi'/>
	I0911 11:19:45.692460 2234986 main.go:141] libmachine: (multinode-378707-m02)       <readonly/>
	I0911 11:19:45.692466 2234986 main.go:141] libmachine: (multinode-378707-m02)     </disk>
	I0911 11:19:45.692473 2234986 main.go:141] libmachine: (multinode-378707-m02)     <disk type='file' device='disk'>
	I0911 11:19:45.692480 2234986 main.go:141] libmachine: (multinode-378707-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0911 11:19:45.692490 2234986 main.go:141] libmachine: (multinode-378707-m02)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/multinode-378707-m02.rawdisk'/>
	I0911 11:19:45.692496 2234986 main.go:141] libmachine: (multinode-378707-m02)       <target dev='hda' bus='virtio'/>
	I0911 11:19:45.692502 2234986 main.go:141] libmachine: (multinode-378707-m02)     </disk>
	I0911 11:19:45.692508 2234986 main.go:141] libmachine: (multinode-378707-m02)     <interface type='network'>
	I0911 11:19:45.692516 2234986 main.go:141] libmachine: (multinode-378707-m02)       <source network='mk-multinode-378707'/>
	I0911 11:19:45.692522 2234986 main.go:141] libmachine: (multinode-378707-m02)       <model type='virtio'/>
	I0911 11:19:45.692528 2234986 main.go:141] libmachine: (multinode-378707-m02)     </interface>
	I0911 11:19:45.692534 2234986 main.go:141] libmachine: (multinode-378707-m02)     <interface type='network'>
	I0911 11:19:45.692544 2234986 main.go:141] libmachine: (multinode-378707-m02)       <source network='default'/>
	I0911 11:19:45.692551 2234986 main.go:141] libmachine: (multinode-378707-m02)       <model type='virtio'/>
	I0911 11:19:45.692558 2234986 main.go:141] libmachine: (multinode-378707-m02)     </interface>
	I0911 11:19:45.692563 2234986 main.go:141] libmachine: (multinode-378707-m02)     <serial type='pty'>
	I0911 11:19:45.692571 2234986 main.go:141] libmachine: (multinode-378707-m02)       <target port='0'/>
	I0911 11:19:45.692579 2234986 main.go:141] libmachine: (multinode-378707-m02)     </serial>
	I0911 11:19:45.692589 2234986 main.go:141] libmachine: (multinode-378707-m02)     <console type='pty'>
	I0911 11:19:45.692609 2234986 main.go:141] libmachine: (multinode-378707-m02)       <target type='serial' port='0'/>
	I0911 11:19:45.692619 2234986 main.go:141] libmachine: (multinode-378707-m02)     </console>
	I0911 11:19:45.692629 2234986 main.go:141] libmachine: (multinode-378707-m02)     <rng model='virtio'>
	I0911 11:19:45.692643 2234986 main.go:141] libmachine: (multinode-378707-m02)       <backend model='random'>/dev/random</backend>
	I0911 11:19:45.692660 2234986 main.go:141] libmachine: (multinode-378707-m02)     </rng>
	I0911 11:19:45.692675 2234986 main.go:141] libmachine: (multinode-378707-m02)     
	I0911 11:19:45.692689 2234986 main.go:141] libmachine: (multinode-378707-m02)     
	I0911 11:19:45.692702 2234986 main.go:141] libmachine: (multinode-378707-m02)   </devices>
	I0911 11:19:45.692713 2234986 main.go:141] libmachine: (multinode-378707-m02) </domain>
	I0911 11:19:45.692724 2234986 main.go:141] libmachine: (multinode-378707-m02) 
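The block above is the libvirt domain XML the kvm2 driver defines for the new VM: 2 vCPUs, 2200 MiB of RAM, the boot2docker ISO attached as a cdrom, the raw disk as the hd, and two virtio NICs (the private mk-multinode-378707 network plus the default network). One way to produce a skeleton like that programmatically is a text/template over the machine parameters; the sketch below is heavily trimmed and its field names are illustrative, not the driver's actual template:

    package main

    import (
        "os"
        "text/template"
    )

    // Trimmed-down stand-in for a libvirt domain template.
    const domainTmpl = `<domain type='kvm'>
      <name>{{.Name}}</name>
      <memory unit='MiB'>{{.Memory}}</memory>
      <vcpu>{{.CPU}}</vcpu>
    </domain>
    `

    type machine struct {
        Name   string
        Memory int
        CPU    int
    }

    func main() {
        t := template.Must(template.New("domain").Parse(domainTmpl))
        // Values match the log above: 2 vCPUs, 2200 MiB.
        err := t.Execute(os.Stdout, machine{Name: "multinode-378707-m02", Memory: 2200, CPU: 2})
        if err != nil {
            panic(err)
        }
    }

The rendered XML is then handed to libvirt to define and start the domain, which is what the subsequent "Creating domain..." lines report.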
	I0911 11:19:45.701265 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:70:db:ce in network default
	I0911 11:19:45.701977 2234986 main.go:141] libmachine: (multinode-378707-m02) Ensuring networks are active...
	I0911 11:19:45.702001 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:45.702842 2234986 main.go:141] libmachine: (multinode-378707-m02) Ensuring network default is active
	I0911 11:19:45.703142 2234986 main.go:141] libmachine: (multinode-378707-m02) Ensuring network mk-multinode-378707 is active
	I0911 11:19:45.703505 2234986 main.go:141] libmachine: (multinode-378707-m02) Getting domain xml...
	I0911 11:19:45.704255 2234986 main.go:141] libmachine: (multinode-378707-m02) Creating domain...
	I0911 11:19:46.958324 2234986 main.go:141] libmachine: (multinode-378707-m02) Waiting to get IP...
	I0911 11:19:46.959128 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:46.959533 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:46.959596 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:46.959530 2235361 retry.go:31] will retry after 248.904668ms: waiting for machine to come up
	I0911 11:19:47.209934 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:47.210343 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:47.210375 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:47.210318 2235361 retry.go:31] will retry after 323.115868ms: waiting for machine to come up
	I0911 11:19:47.534823 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:47.535267 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:47.535298 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:47.535211 2235361 retry.go:31] will retry after 300.438667ms: waiting for machine to come up
	I0911 11:19:47.837755 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:47.838139 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:47.838173 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:47.838086 2235361 retry.go:31] will retry after 392.900918ms: waiting for machine to come up
	I0911 11:19:48.232688 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:48.233138 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:48.233174 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:48.233083 2235361 retry.go:31] will retry after 629.587421ms: waiting for machine to come up
	I0911 11:19:48.864138 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:48.864686 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:48.864721 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:48.864636 2235361 retry.go:31] will retry after 853.172731ms: waiting for machine to come up
	I0911 11:19:49.719350 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:49.719763 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:49.719798 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:49.719699 2235361 retry.go:31] will retry after 1.02712144s: waiting for machine to come up
	I0911 11:19:50.748598 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:50.749048 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:50.749073 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:50.749025 2235361 retry.go:31] will retry after 1.139639553s: waiting for machine to come up
	I0911 11:19:51.890175 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:51.890555 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:51.890596 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:51.890514 2235361 retry.go:31] will retry after 1.562697247s: waiting for machine to come up
	I0911 11:19:53.455280 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:53.455786 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:53.455819 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:53.455714 2235361 retry.go:31] will retry after 2.172898634s: waiting for machine to come up
	I0911 11:19:55.630995 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:55.631504 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:55.631544 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:55.631455 2235361 retry.go:31] will retry after 2.316116349s: waiting for machine to come up
	I0911 11:19:57.950212 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:19:57.950675 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:19:57.950709 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:19:57.950585 2235361 retry.go:31] will retry after 3.49935928s: waiting for machine to come up
	I0911 11:20:01.452000 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:01.452485 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:20:01.452511 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:20:01.452431 2235361 retry.go:31] will retry after 4.276901159s: waiting for machine to come up
	I0911 11:20:05.730763 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:05.731195 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find current IP address of domain multinode-378707-m02 in network mk-multinode-378707
	I0911 11:20:05.731236 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | I0911 11:20:05.731150 2235361 retry.go:31] will retry after 3.514906709s: waiting for machine to come up
	I0911 11:20:09.249877 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.250386 2234986 main.go:141] libmachine: (multinode-378707-m02) Found IP for machine: 192.168.39.220
	I0911 11:20:09.250419 2234986 main.go:141] libmachine: (multinode-378707-m02) Reserving static IP address...
	I0911 11:20:09.250435 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has current primary IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.250925 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | unable to find host DHCP lease matching {name: "multinode-378707-m02", mac: "52:54:00:f1:8c:f4", ip: "192.168.39.220"} in network mk-multinode-378707
	I0911 11:20:09.332263 2234986 main.go:141] libmachine: (multinode-378707-m02) Reserved static IP address: 192.168.39.220
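The retry.go lines above show how the driver waits for the guest's DHCP lease: poll the network for the domain's IP and back off with growing, jittered delays until it appears (roughly 24 seconds here). A compact version of that wait-with-backoff loop, assuming a hypothetical lookupIP helper that queries the libvirt leases:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP is a hypothetical stand-in for querying the libvirt DHCP leases by MAC.
    func lookupIP(mac string) (string, error) {
        return "", errors.New("no lease yet") // replace with a real lease query
    }

    // waitForIP retries lookupIP with growing, jittered delays until timeout.
    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(mac); err == nil && ip != "" {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
            fmt.Printf("will retry after %s: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 4*time.Second {
                delay *= 2
            }
        }
        return "", errors.New("timed out waiting for IP for " + mac)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:f1:8c:f4", 10*time.Second); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }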
	I0911 11:20:09.332304 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Getting to WaitForSSH function...
	I0911 11:20:09.332324 2234986 main.go:141] libmachine: (multinode-378707-m02) Waiting for SSH to be available...
	I0911 11:20:09.335078 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.335505 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:minikube Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:09.335542 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.335560 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Using SSH client type: external
	I0911 11:20:09.335573 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa (-rw-------)
	I0911 11:20:09.335612 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.220 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 11:20:09.335633 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | About to run SSH command:
	I0911 11:20:09.335653 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | exit 0
	I0911 11:20:09.425099 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | SSH cmd err, output: <nil>: 
	I0911 11:20:09.425348 2234986 main.go:141] libmachine: (multinode-378707-m02) KVM machine creation complete!
	I0911 11:20:09.425647 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetConfigRaw
	I0911 11:20:09.426232 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:20:09.426454 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:20:09.426613 2234986 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 11:20:09.426640 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetState
	I0911 11:20:09.427961 2234986 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 11:20:09.428003 2234986 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 11:20:09.428015 2234986 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 11:20:09.428022 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:09.430892 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.431602 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:09.431638 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.431820 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:09.432000 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.432175 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.432293 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:09.432448 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:20:09.432971 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:20:09.432987 2234986 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 11:20:09.552124 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
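The "Waiting for SSH" / "exit 0" probes above are how libmachine decides the guest is reachable: open an SSH session with the generated key and run a no-op command. A minimal equivalent using golang.org/x/crypto/ssh, with the host, user, and key path taken from the log and everything else illustrative:

    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no above
        }
        client, err := ssh.Dial("tcp", "192.168.39.220:22", cfg)
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        if err := session.Run("exit 0"); err != nil {
            panic(err)
        }
        fmt.Println("SSH is available")
    }

Once this no-op succeeds, provisioning proceeds over the same channel (os-release detection, hostname, certificates, container runtime options).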
	I0911 11:20:09.552157 2234986 main.go:141] libmachine: Detecting the provisioner...
	I0911 11:20:09.552166 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:09.555050 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.555620 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:09.555659 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.555913 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:09.556128 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.556317 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.556506 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:09.556721 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:20:09.557179 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:20:09.557197 2234986 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 11:20:09.678204 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 11:20:09.678291 2234986 main.go:141] libmachine: found compatible host: buildroot
	I0911 11:20:09.678305 2234986 main.go:141] libmachine: Provisioning with buildroot...
	I0911 11:20:09.678318 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetMachineName
	I0911 11:20:09.678596 2234986 buildroot.go:166] provisioning hostname "multinode-378707-m02"
	I0911 11:20:09.678632 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetMachineName
	I0911 11:20:09.678860 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:09.681568 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.681984 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:09.682023 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.682136 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:09.682372 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.682541 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.682706 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:09.682880 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:20:09.683492 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:20:09.683514 2234986 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-378707-m02 && echo "multinode-378707-m02" | sudo tee /etc/hostname
	I0911 11:20:09.813926 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-378707-m02
	
	I0911 11:20:09.813965 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:09.817114 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.817544 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:09.817577 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.817761 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:09.818009 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.818186 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:09.818412 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:09.818648 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:20:09.819063 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:20:09.819081 2234986 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-378707-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-378707-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-378707-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:20:09.945145 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:20:09.945194 2234986 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:20:09.945221 2234986 buildroot.go:174] setting up certificates
	I0911 11:20:09.945234 2234986 provision.go:83] configureAuth start
	I0911 11:20:09.945251 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetMachineName
	I0911 11:20:09.945616 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:20:09.948518 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.948936 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:09.948972 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.949164 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:09.951728 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.952052 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:09.952081 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:09.952236 2234986 provision.go:138] copyHostCerts
	I0911 11:20:09.952273 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:20:09.952314 2234986 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:20:09.952326 2234986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:20:09.952420 2234986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:20:09.952535 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:20:09.952558 2234986 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:20:09.952565 2234986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:20:09.952594 2234986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:20:09.952656 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:20:09.952672 2234986 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:20:09.952679 2234986 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:20:09.952700 2234986 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:20:09.952747 2234986 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.multinode-378707-m02 san=[192.168.39.220 192.168.39.220 localhost 127.0.0.1 minikube multinode-378707-m02]
	I0911 11:20:10.094200 2234986 provision.go:172] copyRemoteCerts
	I0911 11:20:10.094314 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:20:10.094359 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:10.097630 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.098011 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.098046 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.098301 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:10.098543 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.098746 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:10.098866 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:20:10.188543 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:20:10.188636 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:20:10.213807 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:20:10.213901 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0911 11:20:10.237176 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:20:10.237264 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:20:10.259767 2234986 provision.go:86] duration metric: configureAuth took 314.49566ms
	I0911 11:20:10.259813 2234986 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:20:10.260051 2234986 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:20:10.260158 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:10.263149 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.263527 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.263565 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.263747 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:10.263963 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.264212 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.264350 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:10.264539 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:20:10.264987 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:20:10.265012 2234986 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:20:10.591361 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
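The stray "%!s(MISSING)" in the printf and date commands in this log is not guest-side corruption; it appears to be Go's fmt package marking a %s (or %N) verb that had no matching argument when the command string was rendered for logging. A two-line illustration of where that token comes from:

    package main

    import "fmt"

    func main() {
        // A %s verb with no corresponding argument renders as "%!s(MISSING)"
        // (go vet would flag this call, but it compiles and runs).
        fmt.Println(fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %s"))
        // Output: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING)
    }

The command executed on the guest still behaves as intended; only the logged rendering is mangled.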
	
	I0911 11:20:10.591405 2234986 main.go:141] libmachine: Checking connection to Docker...
	I0911 11:20:10.591418 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetURL
	I0911 11:20:10.593113 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | Using libvirt version 6000000
	I0911 11:20:10.595595 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.596020 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.596069 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.596219 2234986 main.go:141] libmachine: Docker is up and running!
	I0911 11:20:10.596241 2234986 main.go:141] libmachine: Reticulating splines...
	I0911 11:20:10.596251 2234986 client.go:171] LocalClient.Create took 25.43078716s
	I0911 11:20:10.596286 2234986 start.go:167] duration metric: libmachine.API.Create for "multinode-378707" took 25.430856029s
	I0911 11:20:10.596301 2234986 start.go:300] post-start starting for "multinode-378707-m02" (driver="kvm2")
	I0911 11:20:10.596316 2234986 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:20:10.596343 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:20:10.596593 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:20:10.596619 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:10.598851 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.599241 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.599280 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.599457 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:10.599673 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.599845 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:10.600008 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:20:10.690432 2234986 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:20:10.695404 2234986 command_runner.go:130] > NAME=Buildroot
	I0911 11:20:10.695437 2234986 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0911 11:20:10.695444 2234986 command_runner.go:130] > ID=buildroot
	I0911 11:20:10.695452 2234986 command_runner.go:130] > VERSION_ID=2021.02.12
	I0911 11:20:10.695460 2234986 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0911 11:20:10.695546 2234986 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:20:10.695574 2234986 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:20:10.695660 2234986 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:20:10.695768 2234986 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:20:10.695841 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /etc/ssl/certs/22224712.pem
	I0911 11:20:10.695961 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:20:10.704968 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:20:10.731412 2234986 start.go:303] post-start completed in 135.091188ms
	I0911 11:20:10.731482 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetConfigRaw
	I0911 11:20:10.732231 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:20:10.735137 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.735580 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.735618 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.735954 2234986 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:20:10.736165 2234986 start.go:128] duration metric: createHost completed in 25.589498183s
	I0911 11:20:10.736193 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:10.738657 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.738996 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.739025 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.739168 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:10.739411 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.739633 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.739789 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:10.739968 2234986 main.go:141] libmachine: Using SSH client type: native
	I0911 11:20:10.740366 2234986 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:20:10.740377 2234986 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 11:20:10.861818 2234986 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694431210.844014355
	
	I0911 11:20:10.861853 2234986 fix.go:206] guest clock: 1694431210.844014355
	I0911 11:20:10.861865 2234986 fix.go:219] Guest: 2023-09-11 11:20:10.844014355 +0000 UTC Remote: 2023-09-11 11:20:10.736178819 +0000 UTC m=+95.013530983 (delta=107.835536ms)
	I0911 11:20:10.861886 2234986 fix.go:190] guest clock delta is within tolerance: 107.835536ms
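The fix.go lines compare the guest's clock (read over SSH) with the host's and only resync when the delta exceeds a tolerance; the arithmetic is just the absolute difference against a threshold. A sketch using the two timestamps from the log, with the 2s tolerance assumed purely for illustration:

    package main

    import (
        "fmt"
        "time"
    )

    // clockDeltaWithinTolerance reports whether guest and host clocks agree
    // to within tol (absolute value of the difference).
    func clockDeltaWithinTolerance(guest, host time.Time, tol time.Duration) (time.Duration, bool) {
        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        return delta, delta <= tol
    }

    func main() {
        // Times taken from the log above; the tolerance value is an assumption.
        guest := time.Unix(1694431210, 844014355).UTC()
        host := time.Date(2023, 9, 11, 11, 20, 10, 736178819, time.UTC)
        delta, ok := clockDeltaWithinTolerance(guest, host, 2*time.Second)
        fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
    }

With these inputs the computed delta is 107.835536ms, matching the "guest clock delta is within tolerance" line above.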
	I0911 11:20:10.861893 2234986 start.go:83] releasing machines lock for "multinode-378707-m02", held for 25.715339597s
	I0911 11:20:10.861922 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:20:10.862286 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:20:10.865978 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.866470 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.866512 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.868757 2234986 out.go:177] * Found network options:
	I0911 11:20:10.870673 2234986 out.go:177]   - NO_PROXY=192.168.39.237
	W0911 11:20:10.872059 2234986 proxy.go:119] fail to check proxy env: Error ip not in block
	I0911 11:20:10.872120 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:20:10.872933 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:20:10.873152 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:20:10.873253 2234986 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:20:10.873297 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	W0911 11:20:10.873389 2234986 proxy.go:119] fail to check proxy env: Error ip not in block
	I0911 11:20:10.873471 2234986 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:20:10.873500 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:20:10.876323 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.876596 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.876784 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.876859 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.876968 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:10.877109 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:10.877137 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:10.877199 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.877297 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:20:10.877387 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:10.877475 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:20:10.877540 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:20:10.877612 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:20:10.877730 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:20:10.998594 2234986 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 11:20:11.132074 2234986 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:20:11.138011 2234986 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0911 11:20:11.138347 2234986 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:20:11.138416 2234986 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:20:11.154766 2234986 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0911 11:20:11.155226 2234986 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 11:20:11.155257 2234986 start.go:466] detecting cgroup driver to use...
	I0911 11:20:11.155334 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:20:11.169965 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:20:11.182620 2234986 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:20:11.182718 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:20:11.196299 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:20:11.209045 2234986 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:20:11.313041 2234986 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0911 11:20:11.313123 2234986 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:20:11.328013 2234986 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0911 11:20:11.438329 2234986 docker.go:212] disabling docker service ...
	I0911 11:20:11.438423 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:20:11.452633 2234986 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:20:11.466026 2234986 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0911 11:20:11.466145 2234986 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:20:11.480881 2234986 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0911 11:20:11.578553 2234986 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:20:11.679933 2234986 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0911 11:20:11.679968 2234986 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0911 11:20:11.680036 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:20:11.694466 2234986 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:20:11.712657 2234986 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
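With /etc/crictl.yaml written, crictl resolves the CRI-O socket on its own, which is why the version probe further down runs crictl without a --runtime-endpoint flag. A minimal by-hand check on the node would be (a sketch; the expected fields match the capture below):

	$ sudo crictl version        # should report RuntimeName: cri-o, RuntimeVersion: 1.24.1
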
	I0911 11:20:11.712699 2234986 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:20:11.712778 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:20:11.723446 2234986 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:20:11.723514 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:20:11.734549 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:20:11.745658 2234986 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:20:11.756156 2234986 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:20:11.767256 2234986 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:20:11.776485 2234986 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:20:11.776555 2234986 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:20:11.776613 2234986 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 11:20:11.791934 2234986 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:20:11.801984 2234986 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:20:11.904237 2234986 ssh_runner.go:195] Run: sudo systemctl restart crio
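Taken together, the sed edits above pin the pause image, cgroup driver, and conmon cgroup before CRI-O is restarted. A sketch of what those commands leave in /etc/crio/crio.conf.d/02-crio.conf (only these three keys follow from the commands; everything else in the config dump further down is the image default):

	pause_image = "registry.k8s.io/pause:3.9"
	cgroup_manager = "cgroupfs"
	conmon_cgroup = "pod"
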
	I0911 11:20:12.082036 2234986 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:20:12.082128 2234986 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:20:12.086802 2234986 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0911 11:20:12.086832 2234986 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0911 11:20:12.086842 2234986 command_runner.go:130] > Device: 16h/22d	Inode: 726         Links: 1
	I0911 11:20:12.086859 2234986 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:20:12.086867 2234986 command_runner.go:130] > Access: 2023-09-11 11:20:12.050031285 +0000
	I0911 11:20:12.086875 2234986 command_runner.go:130] > Modify: 2023-09-11 11:20:12.050031285 +0000
	I0911 11:20:12.086884 2234986 command_runner.go:130] > Change: 2023-09-11 11:20:12.050031285 +0000
	I0911 11:20:12.086890 2234986 command_runner.go:130] >  Birth: -
	I0911 11:20:12.086949 2234986 start.go:534] Will wait 60s for crictl version
	I0911 11:20:12.087012 2234986 ssh_runner.go:195] Run: which crictl
	I0911 11:20:12.090607 2234986 command_runner.go:130] > /usr/bin/crictl
	I0911 11:20:12.090826 2234986 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:20:12.128630 2234986 command_runner.go:130] > Version:  0.1.0
	I0911 11:20:12.128659 2234986 command_runner.go:130] > RuntimeName:  cri-o
	I0911 11:20:12.128880 2234986 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0911 11:20:12.129231 2234986 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0911 11:20:12.130738 2234986 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 11:20:12.130807 2234986 ssh_runner.go:195] Run: crio --version
	I0911 11:20:12.185289 2234986 command_runner.go:130] > crio version 1.24.1
	I0911 11:20:12.185313 2234986 command_runner.go:130] > Version:          1.24.1
	I0911 11:20:12.185320 2234986 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:20:12.185324 2234986 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:20:12.185337 2234986 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:20:12.185342 2234986 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:20:12.185346 2234986 command_runner.go:130] > Compiler:         gc
	I0911 11:20:12.185350 2234986 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:20:12.185356 2234986 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:20:12.185363 2234986 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:20:12.185367 2234986 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:20:12.185371 2234986 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:20:12.186900 2234986 ssh_runner.go:195] Run: crio --version
	I0911 11:20:12.232586 2234986 command_runner.go:130] > crio version 1.24.1
	I0911 11:20:12.232625 2234986 command_runner.go:130] > Version:          1.24.1
	I0911 11:20:12.232632 2234986 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:20:12.232637 2234986 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:20:12.232643 2234986 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:20:12.232648 2234986 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:20:12.232652 2234986 command_runner.go:130] > Compiler:         gc
	I0911 11:20:12.232656 2234986 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:20:12.232661 2234986 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:20:12.232676 2234986 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:20:12.232680 2234986 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:20:12.232684 2234986 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:20:12.234923 2234986 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 11:20:12.236699 2234986 out.go:177]   - env NO_PROXY=192.168.39.237
	I0911 11:20:12.238176 2234986 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:20:12.240942 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:12.241377 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:20:12.241412 2234986 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:20:12.241607 2234986 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 11:20:12.245831 2234986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:20:12.258421 2234986 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707 for IP: 192.168.39.220
	I0911 11:20:12.258457 2234986 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:20:12.258639 2234986 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:20:12.258689 2234986 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:20:12.258711 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:20:12.258743 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:20:12.258760 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:20:12.258778 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:20:12.258886 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:20:12.258943 2234986 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:20:12.258959 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:20:12.258992 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:20:12.259029 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:20:12.259070 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:20:12.259130 2234986 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:20:12.259178 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem -> /usr/share/ca-certificates/2222471.pem
	I0911 11:20:12.259199 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /usr/share/ca-certificates/22224712.pem
	I0911 11:20:12.259217 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:20:12.259596 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:20:12.283379 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:20:12.309433 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:20:12.332345 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:20:12.355218 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:20:12.378602 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:20:12.400373 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:20:12.422371 2234986 ssh_runner.go:195] Run: openssl version
	I0911 11:20:12.427926 2234986 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0911 11:20:12.427995 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:20:12.439339 2234986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:20:12.443852 2234986 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:20:12.443979 2234986 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:20:12.444030 2234986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:20:12.449138 2234986 command_runner.go:130] > 51391683
	I0911 11:20:12.449512 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 11:20:12.460175 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:20:12.471113 2234986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:20:12.476042 2234986 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:20:12.476084 2234986 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:20:12.476143 2234986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:20:12.481872 2234986 command_runner.go:130] > 3ec20f2e
	I0911 11:20:12.482082 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:20:12.493320 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:20:12.504494 2234986 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:20:12.509002 2234986 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:20:12.509061 2234986 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:20:12.509120 2234986 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:20:12.514601 2234986 command_runner.go:130] > b5213941
	I0911 11:20:12.514815 2234986 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
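The three certificate blocks above all follow the OpenSSL hash-directory convention: copy the PEM under /usr/share/ca-certificates, compute its subject hash, then drop a <hash>.0 symlink in /etc/ssl/certs so the library can find it. Replaying one round by hand, with the values lifted from the log (only the trailing ls is added for illustration):

	$ openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	$ sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0
	$ ls -l /etc/ssl/certs/b5213941.0
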
	I0911 11:20:12.525891 2234986 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:20:12.530015 2234986 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:20:12.530157 2234986 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:20:12.530278 2234986 ssh_runner.go:195] Run: crio config
	I0911 11:20:12.595581 2234986 command_runner.go:130] ! time="2023-09-11 11:20:12.581140444Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0911 11:20:12.595696 2234986 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0911 11:20:12.604159 2234986 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0911 11:20:12.604189 2234986 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0911 11:20:12.604196 2234986 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0911 11:20:12.604199 2234986 command_runner.go:130] > #
	I0911 11:20:12.604228 2234986 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0911 11:20:12.604236 2234986 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0911 11:20:12.604242 2234986 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0911 11:20:12.604250 2234986 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0911 11:20:12.604255 2234986 command_runner.go:130] > # reload'.
	I0911 11:20:12.604262 2234986 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0911 11:20:12.604274 2234986 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0911 11:20:12.604283 2234986 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0911 11:20:12.604291 2234986 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0911 11:20:12.604300 2234986 command_runner.go:130] > [crio]
	I0911 11:20:12.604310 2234986 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0911 11:20:12.604321 2234986 command_runner.go:130] > # containers images, in this directory.
	I0911 11:20:12.604329 2234986 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0911 11:20:12.604346 2234986 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0911 11:20:12.604357 2234986 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0911 11:20:12.604366 2234986 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0911 11:20:12.604379 2234986 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0911 11:20:12.604387 2234986 command_runner.go:130] > storage_driver = "overlay"
	I0911 11:20:12.604398 2234986 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0911 11:20:12.604407 2234986 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0911 11:20:12.604412 2234986 command_runner.go:130] > storage_option = [
	I0911 11:20:12.604419 2234986 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0911 11:20:12.604422 2234986 command_runner.go:130] > ]
	I0911 11:20:12.604431 2234986 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0911 11:20:12.604437 2234986 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0911 11:20:12.604444 2234986 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0911 11:20:12.604450 2234986 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0911 11:20:12.604458 2234986 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0911 11:20:12.604463 2234986 command_runner.go:130] > # always happen on a node reboot
	I0911 11:20:12.604470 2234986 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0911 11:20:12.604475 2234986 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0911 11:20:12.604483 2234986 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0911 11:20:12.604493 2234986 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0911 11:20:12.604500 2234986 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0911 11:20:12.604508 2234986 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0911 11:20:12.604518 2234986 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0911 11:20:12.604525 2234986 command_runner.go:130] > # internal_wipe = true
	I0911 11:20:12.604530 2234986 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0911 11:20:12.604539 2234986 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0911 11:20:12.604544 2234986 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0911 11:20:12.604549 2234986 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0911 11:20:12.604557 2234986 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0911 11:20:12.604562 2234986 command_runner.go:130] > [crio.api]
	I0911 11:20:12.604568 2234986 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0911 11:20:12.604572 2234986 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0911 11:20:12.604578 2234986 command_runner.go:130] > # IP address on which the stream server will listen.
	I0911 11:20:12.604582 2234986 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0911 11:20:12.604589 2234986 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0911 11:20:12.604596 2234986 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0911 11:20:12.604601 2234986 command_runner.go:130] > # stream_port = "0"
	I0911 11:20:12.604608 2234986 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0911 11:20:12.604612 2234986 command_runner.go:130] > # stream_enable_tls = false
	I0911 11:20:12.604620 2234986 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0911 11:20:12.604625 2234986 command_runner.go:130] > # stream_idle_timeout = ""
	I0911 11:20:12.604630 2234986 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0911 11:20:12.604637 2234986 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0911 11:20:12.604641 2234986 command_runner.go:130] > # minutes.
	I0911 11:20:12.604647 2234986 command_runner.go:130] > # stream_tls_cert = ""
	I0911 11:20:12.604653 2234986 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0911 11:20:12.604662 2234986 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0911 11:20:12.604666 2234986 command_runner.go:130] > # stream_tls_key = ""
	I0911 11:20:12.604672 2234986 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0911 11:20:12.604680 2234986 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0911 11:20:12.604685 2234986 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0911 11:20:12.604692 2234986 command_runner.go:130] > # stream_tls_ca = ""
	I0911 11:20:12.604699 2234986 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:20:12.604705 2234986 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0911 11:20:12.604712 2234986 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:20:12.604727 2234986 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0911 11:20:12.604747 2234986 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0911 11:20:12.604756 2234986 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0911 11:20:12.604759 2234986 command_runner.go:130] > [crio.runtime]
	I0911 11:20:12.604767 2234986 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0911 11:20:12.604773 2234986 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0911 11:20:12.604779 2234986 command_runner.go:130] > # "nofile=1024:2048"
	I0911 11:20:12.604785 2234986 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0911 11:20:12.604791 2234986 command_runner.go:130] > # default_ulimits = [
	I0911 11:20:12.604795 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.604801 2234986 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0911 11:20:12.604808 2234986 command_runner.go:130] > # no_pivot = false
	I0911 11:20:12.604828 2234986 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0911 11:20:12.604843 2234986 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0911 11:20:12.604852 2234986 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0911 11:20:12.604860 2234986 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0911 11:20:12.604867 2234986 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0911 11:20:12.604874 2234986 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:20:12.604881 2234986 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0911 11:20:12.604885 2234986 command_runner.go:130] > # Cgroup setting for conmon
	I0911 11:20:12.604894 2234986 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0911 11:20:12.604900 2234986 command_runner.go:130] > conmon_cgroup = "pod"
	I0911 11:20:12.604906 2234986 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0911 11:20:12.604912 2234986 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0911 11:20:12.604918 2234986 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:20:12.604924 2234986 command_runner.go:130] > conmon_env = [
	I0911 11:20:12.604930 2234986 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0911 11:20:12.604936 2234986 command_runner.go:130] > ]
	I0911 11:20:12.604942 2234986 command_runner.go:130] > # Additional environment variables to set for all the
	I0911 11:20:12.604954 2234986 command_runner.go:130] > # containers. These are overridden if set in the
	I0911 11:20:12.604961 2234986 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0911 11:20:12.604966 2234986 command_runner.go:130] > # default_env = [
	I0911 11:20:12.604971 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.604977 2234986 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0911 11:20:12.604983 2234986 command_runner.go:130] > # selinux = false
	I0911 11:20:12.604989 2234986 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0911 11:20:12.604997 2234986 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0911 11:20:12.605002 2234986 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0911 11:20:12.605007 2234986 command_runner.go:130] > # seccomp_profile = ""
	I0911 11:20:12.605013 2234986 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0911 11:20:12.605020 2234986 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0911 11:20:12.605027 2234986 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0911 11:20:12.605033 2234986 command_runner.go:130] > # which might increase security.
	I0911 11:20:12.605037 2234986 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0911 11:20:12.605044 2234986 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0911 11:20:12.605050 2234986 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0911 11:20:12.605060 2234986 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0911 11:20:12.605066 2234986 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0911 11:20:12.605073 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:20:12.605078 2234986 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0911 11:20:12.605085 2234986 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0911 11:20:12.605089 2234986 command_runner.go:130] > # the cgroup blockio controller.
	I0911 11:20:12.605096 2234986 command_runner.go:130] > # blockio_config_file = ""
	I0911 11:20:12.605105 2234986 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0911 11:20:12.605115 2234986 command_runner.go:130] > # irqbalance daemon.
	I0911 11:20:12.605124 2234986 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0911 11:20:12.605134 2234986 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0911 11:20:12.605142 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:20:12.605146 2234986 command_runner.go:130] > # rdt_config_file = ""
	I0911 11:20:12.605153 2234986 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0911 11:20:12.605157 2234986 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0911 11:20:12.605165 2234986 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0911 11:20:12.605169 2234986 command_runner.go:130] > # separate_pull_cgroup = ""
	I0911 11:20:12.605176 2234986 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0911 11:20:12.605183 2234986 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0911 11:20:12.605190 2234986 command_runner.go:130] > # will be added.
	I0911 11:20:12.605198 2234986 command_runner.go:130] > # default_capabilities = [
	I0911 11:20:12.605207 2234986 command_runner.go:130] > # 	"CHOWN",
	I0911 11:20:12.605215 2234986 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0911 11:20:12.605224 2234986 command_runner.go:130] > # 	"FSETID",
	I0911 11:20:12.605230 2234986 command_runner.go:130] > # 	"FOWNER",
	I0911 11:20:12.605234 2234986 command_runner.go:130] > # 	"SETGID",
	I0911 11:20:12.605238 2234986 command_runner.go:130] > # 	"SETUID",
	I0911 11:20:12.605245 2234986 command_runner.go:130] > # 	"SETPCAP",
	I0911 11:20:12.605249 2234986 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0911 11:20:12.605254 2234986 command_runner.go:130] > # 	"KILL",
	I0911 11:20:12.605257 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.605266 2234986 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0911 11:20:12.605271 2234986 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:20:12.605276 2234986 command_runner.go:130] > # default_sysctls = [
	I0911 11:20:12.605284 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.605293 2234986 command_runner.go:130] > # List of devices on the host that a
	I0911 11:20:12.605308 2234986 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0911 11:20:12.605318 2234986 command_runner.go:130] > # allowed_devices = [
	I0911 11:20:12.605323 2234986 command_runner.go:130] > # 	"/dev/fuse",
	I0911 11:20:12.605329 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.605333 2234986 command_runner.go:130] > # List of additional devices, specified as
	I0911 11:20:12.605340 2234986 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0911 11:20:12.605348 2234986 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0911 11:20:12.605369 2234986 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:20:12.605380 2234986 command_runner.go:130] > # additional_devices = [
	I0911 11:20:12.605385 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.605398 2234986 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0911 11:20:12.605406 2234986 command_runner.go:130] > # cdi_spec_dirs = [
	I0911 11:20:12.605416 2234986 command_runner.go:130] > # 	"/etc/cdi",
	I0911 11:20:12.605423 2234986 command_runner.go:130] > # 	"/var/run/cdi",
	I0911 11:20:12.605430 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.605436 2234986 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0911 11:20:12.605445 2234986 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0911 11:20:12.605450 2234986 command_runner.go:130] > # Defaults to false.
	I0911 11:20:12.605462 2234986 command_runner.go:130] > # device_ownership_from_security_context = false
	I0911 11:20:12.605477 2234986 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0911 11:20:12.605490 2234986 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0911 11:20:12.605500 2234986 command_runner.go:130] > # hooks_dir = [
	I0911 11:20:12.605508 2234986 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0911 11:20:12.605516 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.605525 2234986 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0911 11:20:12.605535 2234986 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0911 11:20:12.605548 2234986 command_runner.go:130] > # its default mounts from the following two files:
	I0911 11:20:12.605557 2234986 command_runner.go:130] > #
	I0911 11:20:12.605567 2234986 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0911 11:20:12.605581 2234986 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0911 11:20:12.605593 2234986 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0911 11:20:12.605601 2234986 command_runner.go:130] > #
	I0911 11:20:12.605610 2234986 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0911 11:20:12.605620 2234986 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0911 11:20:12.605633 2234986 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0911 11:20:12.605645 2234986 command_runner.go:130] > #      only add mounts it finds in this file.
	I0911 11:20:12.605654 2234986 command_runner.go:130] > #
	I0911 11:20:12.605662 2234986 command_runner.go:130] > # default_mounts_file = ""
	I0911 11:20:12.605671 2234986 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0911 11:20:12.605682 2234986 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0911 11:20:12.605692 2234986 command_runner.go:130] > pids_limit = 1024
	I0911 11:20:12.605698 2234986 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0911 11:20:12.605710 2234986 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0911 11:20:12.605735 2234986 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0911 11:20:12.605752 2234986 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0911 11:20:12.605762 2234986 command_runner.go:130] > # log_size_max = -1
	I0911 11:20:12.605774 2234986 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0911 11:20:12.605782 2234986 command_runner.go:130] > # log_to_journald = false
	I0911 11:20:12.605788 2234986 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0911 11:20:12.605801 2234986 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0911 11:20:12.605813 2234986 command_runner.go:130] > # Path to directory for container attach sockets.
	I0911 11:20:12.605825 2234986 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0911 11:20:12.605837 2234986 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0911 11:20:12.605850 2234986 command_runner.go:130] > # bind_mount_prefix = ""
	I0911 11:20:12.605862 2234986 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0911 11:20:12.605867 2234986 command_runner.go:130] > # read_only = false
	I0911 11:20:12.605878 2234986 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0911 11:20:12.605892 2234986 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0911 11:20:12.605903 2234986 command_runner.go:130] > # live configuration reload.
	I0911 11:20:12.605913 2234986 command_runner.go:130] > # log_level = "info"
	I0911 11:20:12.605923 2234986 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0911 11:20:12.605934 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:20:12.605944 2234986 command_runner.go:130] > # log_filter = ""
	I0911 11:20:12.605952 2234986 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0911 11:20:12.605962 2234986 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0911 11:20:12.605969 2234986 command_runner.go:130] > # separated by comma.
	I0911 11:20:12.605979 2234986 command_runner.go:130] > # uid_mappings = ""
	I0911 11:20:12.605990 2234986 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0911 11:20:12.606006 2234986 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0911 11:20:12.606016 2234986 command_runner.go:130] > # separated by comma.
	I0911 11:20:12.606027 2234986 command_runner.go:130] > # gid_mappings = ""
	I0911 11:20:12.606035 2234986 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0911 11:20:12.606046 2234986 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:20:12.606060 2234986 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:20:12.606071 2234986 command_runner.go:130] > # minimum_mappable_uid = -1
	I0911 11:20:12.606083 2234986 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0911 11:20:12.606097 2234986 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:20:12.606110 2234986 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:20:12.606119 2234986 command_runner.go:130] > # minimum_mappable_gid = -1
	I0911 11:20:12.606125 2234986 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0911 11:20:12.606137 2234986 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0911 11:20:12.606151 2234986 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0911 11:20:12.606159 2234986 command_runner.go:130] > # ctr_stop_timeout = 30
	I0911 11:20:12.606172 2234986 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0911 11:20:12.606185 2234986 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0911 11:20:12.606196 2234986 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0911 11:20:12.606205 2234986 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0911 11:20:12.606212 2234986 command_runner.go:130] > drop_infra_ctr = false
	I0911 11:20:12.606225 2234986 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0911 11:20:12.606238 2234986 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0911 11:20:12.606254 2234986 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0911 11:20:12.606264 2234986 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0911 11:20:12.606275 2234986 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0911 11:20:12.606286 2234986 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0911 11:20:12.606293 2234986 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0911 11:20:12.606303 2234986 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0911 11:20:12.606310 2234986 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0911 11:20:12.606328 2234986 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0911 11:20:12.606342 2234986 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0911 11:20:12.606356 2234986 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0911 11:20:12.606366 2234986 command_runner.go:130] > # default_runtime = "runc"
	I0911 11:20:12.606377 2234986 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0911 11:20:12.606388 2234986 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0911 11:20:12.606406 2234986 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0911 11:20:12.606418 2234986 command_runner.go:130] > # creation as a file is not desired either.
	I0911 11:20:12.606435 2234986 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0911 11:20:12.606449 2234986 command_runner.go:130] > # the hostname is being managed dynamically.
	I0911 11:20:12.606460 2234986 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0911 11:20:12.606467 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.606473 2234986 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0911 11:20:12.606488 2234986 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0911 11:20:12.606503 2234986 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0911 11:20:12.606517 2234986 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0911 11:20:12.606525 2234986 command_runner.go:130] > #
	I0911 11:20:12.606534 2234986 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0911 11:20:12.606545 2234986 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0911 11:20:12.606551 2234986 command_runner.go:130] > #  runtime_type = "oci"
	I0911 11:20:12.606558 2234986 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0911 11:20:12.606566 2234986 command_runner.go:130] > #  privileged_without_host_devices = false
	I0911 11:20:12.606577 2234986 command_runner.go:130] > #  allowed_annotations = []
	I0911 11:20:12.606586 2234986 command_runner.go:130] > # Where:
	I0911 11:20:12.606597 2234986 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0911 11:20:12.606611 2234986 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0911 11:20:12.606625 2234986 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0911 11:20:12.606637 2234986 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0911 11:20:12.606643 2234986 command_runner.go:130] > #   in $PATH.
	I0911 11:20:12.606652 2234986 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0911 11:20:12.606664 2234986 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0911 11:20:12.606675 2234986 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0911 11:20:12.606684 2234986 command_runner.go:130] > #   state.
	I0911 11:20:12.606695 2234986 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0911 11:20:12.606708 2234986 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0911 11:20:12.606725 2234986 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0911 11:20:12.606734 2234986 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0911 11:20:12.606745 2234986 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0911 11:20:12.606760 2234986 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0911 11:20:12.606772 2234986 command_runner.go:130] > #   The currently recognized values are:
	I0911 11:20:12.606783 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0911 11:20:12.606798 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0911 11:20:12.606809 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0911 11:20:12.606819 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0911 11:20:12.606832 2234986 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0911 11:20:12.606847 2234986 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0911 11:20:12.606860 2234986 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0911 11:20:12.606872 2234986 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0911 11:20:12.606883 2234986 command_runner.go:130] > #   should be moved to the container's cgroup
	I0911 11:20:12.606893 2234986 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0911 11:20:12.606898 2234986 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0911 11:20:12.606906 2234986 command_runner.go:130] > runtime_type = "oci"
	I0911 11:20:12.606913 2234986 command_runner.go:130] > runtime_root = "/run/runc"
	I0911 11:20:12.606923 2234986 command_runner.go:130] > runtime_config_path = ""
	I0911 11:20:12.606930 2234986 command_runner.go:130] > monitor_path = ""
	I0911 11:20:12.606940 2234986 command_runner.go:130] > monitor_cgroup = ""
	I0911 11:20:12.606949 2234986 command_runner.go:130] > monitor_exec_cgroup = ""
	I0911 11:20:12.606963 2234986 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0911 11:20:12.606970 2234986 command_runner.go:130] > # running containers
	I0911 11:20:12.606980 2234986 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0911 11:20:12.606986 2234986 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0911 11:20:12.607023 2234986 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0911 11:20:12.607039 2234986 command_runner.go:130] > # surface and mitigating the consequences of a container breakout.
	I0911 11:20:12.607047 2234986 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0911 11:20:12.607055 2234986 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0911 11:20:12.607066 2234986 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0911 11:20:12.607072 2234986 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0911 11:20:12.607079 2234986 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0911 11:20:12.607086 2234986 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0911 11:20:12.607101 2234986 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0911 11:20:12.607114 2234986 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0911 11:20:12.607129 2234986 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0911 11:20:12.607145 2234986 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix, and a set of resources it supports mutating.
	I0911 11:20:12.607157 2234986 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0911 11:20:12.607165 2234986 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0911 11:20:12.607185 2234986 command_runner.go:130] > # For a container to opt into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0911 11:20:12.607202 2234986 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0911 11:20:12.607215 2234986 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0911 11:20:12.607230 2234986 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0911 11:20:12.607238 2234986 command_runner.go:130] > # Example:
	I0911 11:20:12.607243 2234986 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0911 11:20:12.607250 2234986 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0911 11:20:12.607264 2234986 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0911 11:20:12.607275 2234986 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0911 11:20:12.607284 2234986 command_runner.go:130] > # cpuset = 0
	I0911 11:20:12.607291 2234986 command_runner.go:130] > # cpushares = "0-1"
	I0911 11:20:12.607299 2234986 command_runner.go:130] > # Where:
	I0911 11:20:12.607307 2234986 command_runner.go:130] > # The workload name is workload-type.
	I0911 11:20:12.607322 2234986 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0911 11:20:12.607330 2234986 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0911 11:20:12.607338 2234986 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0911 11:20:12.607356 2234986 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0911 11:20:12.607369 2234986 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0911 11:20:12.607375 2234986 command_runner.go:130] > # 
	I0911 11:20:12.607387 2234986 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0911 11:20:12.607395 2234986 command_runner.go:130] > #
	I0911 11:20:12.607406 2234986 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0911 11:20:12.607415 2234986 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0911 11:20:12.607425 2234986 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0911 11:20:12.607443 2234986 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0911 11:20:12.607456 2234986 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0911 11:20:12.607466 2234986 command_runner.go:130] > [crio.image]
	I0911 11:20:12.607476 2234986 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0911 11:20:12.607487 2234986 command_runner.go:130] > # default_transport = "docker://"
	I0911 11:20:12.607495 2234986 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0911 11:20:12.607506 2234986 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:20:12.607514 2234986 command_runner.go:130] > # global_auth_file = ""
	I0911 11:20:12.607525 2234986 command_runner.go:130] > # The image used to instantiate infra containers.
	I0911 11:20:12.607537 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:20:12.607548 2234986 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0911 11:20:12.607561 2234986 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0911 11:20:12.607574 2234986 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:20:12.607582 2234986 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:20:12.607587 2234986 command_runner.go:130] > # pause_image_auth_file = ""
	I0911 11:20:12.607597 2234986 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0911 11:20:12.607611 2234986 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0911 11:20:12.607621 2234986 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0911 11:20:12.607635 2234986 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0911 11:20:12.607645 2234986 command_runner.go:130] > # pause_command = "/pause"
	I0911 11:20:12.607657 2234986 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0911 11:20:12.607668 2234986 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0911 11:20:12.607677 2234986 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0911 11:20:12.607697 2234986 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0911 11:20:12.607710 2234986 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0911 11:20:12.607719 2234986 command_runner.go:130] > # signature_policy = ""
	I0911 11:20:12.607736 2234986 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0911 11:20:12.607749 2234986 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0911 11:20:12.607757 2234986 command_runner.go:130] > # changing them here.
	I0911 11:20:12.607761 2234986 command_runner.go:130] > # insecure_registries = [
	I0911 11:20:12.607770 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.607787 2234986 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0911 11:20:12.607799 2234986 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0911 11:20:12.607810 2234986 command_runner.go:130] > # image_volumes = "mkdir"
	I0911 11:20:12.607822 2234986 command_runner.go:130] > # Temporary directory to use for storing big files
	I0911 11:20:12.607829 2234986 command_runner.go:130] > # big_files_temporary_dir = ""
	I0911 11:20:12.607840 2234986 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0911 11:20:12.607847 2234986 command_runner.go:130] > # CNI plugins.
	I0911 11:20:12.607854 2234986 command_runner.go:130] > [crio.network]
	I0911 11:20:12.607868 2234986 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0911 11:20:12.607881 2234986 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0911 11:20:12.607891 2234986 command_runner.go:130] > # cni_default_network = ""
	I0911 11:20:12.607904 2234986 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0911 11:20:12.607914 2234986 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0911 11:20:12.607925 2234986 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0911 11:20:12.607931 2234986 command_runner.go:130] > # plugin_dirs = [
	I0911 11:20:12.607937 2234986 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0911 11:20:12.607943 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.607953 2234986 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0911 11:20:12.607963 2234986 command_runner.go:130] > [crio.metrics]
	I0911 11:20:12.607972 2234986 command_runner.go:130] > # Globally enable or disable metrics support.
	I0911 11:20:12.607982 2234986 command_runner.go:130] > enable_metrics = true
	I0911 11:20:12.607990 2234986 command_runner.go:130] > # Specify enabled metrics collectors.
	I0911 11:20:12.608001 2234986 command_runner.go:130] > # Per default all metrics are enabled.
	I0911 11:20:12.608012 2234986 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0911 11:20:12.608019 2234986 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0911 11:20:12.608033 2234986 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0911 11:20:12.608044 2234986 command_runner.go:130] > # metrics_collectors = [
	I0911 11:20:12.608053 2234986 command_runner.go:130] > # 	"operations",
	I0911 11:20:12.608062 2234986 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0911 11:20:12.608072 2234986 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0911 11:20:12.608082 2234986 command_runner.go:130] > # 	"operations_errors",
	I0911 11:20:12.608089 2234986 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0911 11:20:12.608098 2234986 command_runner.go:130] > # 	"image_pulls_by_name",
	I0911 11:20:12.608102 2234986 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0911 11:20:12.608112 2234986 command_runner.go:130] > # 	"image_pulls_failures",
	I0911 11:20:12.608120 2234986 command_runner.go:130] > # 	"image_pulls_successes",
	I0911 11:20:12.608131 2234986 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0911 11:20:12.608138 2234986 command_runner.go:130] > # 	"image_layer_reuse",
	I0911 11:20:12.608148 2234986 command_runner.go:130] > # 	"containers_oom_total",
	I0911 11:20:12.608155 2234986 command_runner.go:130] > # 	"containers_oom",
	I0911 11:20:12.608165 2234986 command_runner.go:130] > # 	"processes_defunct",
	I0911 11:20:12.608172 2234986 command_runner.go:130] > # 	"operations_total",
	I0911 11:20:12.608182 2234986 command_runner.go:130] > # 	"operations_latency_seconds",
	I0911 11:20:12.608187 2234986 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0911 11:20:12.608197 2234986 command_runner.go:130] > # 	"operations_errors_total",
	I0911 11:20:12.608208 2234986 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0911 11:20:12.608216 2234986 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0911 11:20:12.608227 2234986 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0911 11:20:12.608234 2234986 command_runner.go:130] > # 	"image_pulls_success_total",
	I0911 11:20:12.608245 2234986 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0911 11:20:12.608254 2234986 command_runner.go:130] > # 	"containers_oom_count_total",
	I0911 11:20:12.608263 2234986 command_runner.go:130] > # ]
	I0911 11:20:12.608269 2234986 command_runner.go:130] > # The port on which the metrics server will listen.
	I0911 11:20:12.608276 2234986 command_runner.go:130] > # metrics_port = 9090
	I0911 11:20:12.608285 2234986 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0911 11:20:12.608296 2234986 command_runner.go:130] > # metrics_socket = ""
	I0911 11:20:12.608305 2234986 command_runner.go:130] > # The certificate for the secure metrics server.
	I0911 11:20:12.608318 2234986 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0911 11:20:12.608329 2234986 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0911 11:20:12.608340 2234986 command_runner.go:130] > # certificate on any modification event.
	I0911 11:20:12.608349 2234986 command_runner.go:130] > # metrics_cert = ""
	I0911 11:20:12.608355 2234986 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0911 11:20:12.608363 2234986 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0911 11:20:12.608367 2234986 command_runner.go:130] > # metrics_key = ""
	I0911 11:20:12.608373 2234986 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0911 11:20:12.608377 2234986 command_runner.go:130] > [crio.tracing]
	I0911 11:20:12.608385 2234986 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0911 11:20:12.608396 2234986 command_runner.go:130] > # enable_tracing = false
	I0911 11:20:12.608409 2234986 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0911 11:20:12.608420 2234986 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0911 11:20:12.608429 2234986 command_runner.go:130] > # Number of samples to collect per million spans.
	I0911 11:20:12.608440 2234986 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0911 11:20:12.608452 2234986 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0911 11:20:12.608459 2234986 command_runner.go:130] > [crio.stats]
	I0911 11:20:12.608465 2234986 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0911 11:20:12.608472 2234986 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0911 11:20:12.608477 2234986 command_runner.go:130] > # stats_collection_period = 0
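	The block above is the CRI-O configuration dumped while provisioning the worker. As a minimal sketch (not part of the test run), assuming SSH access to the profile's nodes and that the crio binary's "config" subcommand is available on the node, the effective config can be re-dumped and the runc handler spot-checked:
	# Sketch: re-dump the effective CRI-O config on the worker and check the runc
	# runtime handler; "multinode-378707" and "m02" are the profile/node names in this log.
	minikube ssh -p multinode-378707 -n m02 -- "sudo crio config" | grep -A 4 '\[crio.runtime.runtimes.runc\]'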
	I0911 11:20:12.608554 2234986 cni.go:84] Creating CNI manager for ""
	I0911 11:20:12.608566 2234986 cni.go:136] 2 nodes found, recommending kindnet
	I0911 11:20:12.608580 2234986 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:20:12.608605 2234986 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-378707 NodeName:multinode-378707-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:20:12.608733 2234986 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-378707-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
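	The generated kubeadm configuration above is what the worker join below consumes. As a small sketch (assuming kubectl access through the multinode-378707 context), the cluster-side copy of the same ClusterConfiguration can be read back the way the join output later suggests:
	# Sketch: read the ClusterConfiguration stored in the kubeadm-config ConfigMap
	# and confirm the pod subnet matches the generated config above.
	kubectl --context multinode-378707 -n kube-system get configmap kubeadm-config \
	  -o jsonpath='{.data.ClusterConfiguration}' | grep podSubnet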
	
	I0911 11:20:12.608807 2234986 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-378707-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
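	The kubelet unit drop-in above is written to the worker a few steps below (10-kubeadm.conf and kubelet.service). A minimal sketch for inspecting what actually landed on the node, assuming the node name "m02" from this profile:
	# Sketch: show the kubelet unit together with the 10-kubeadm.conf drop-in on the worker.
	minikube ssh -p multinode-378707 -n m02 -- "systemctl cat kubelet"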
	I0911 11:20:12.608901 2234986 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:20:12.620560 2234986 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	I0911 11:20:12.620633 2234986 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.1: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.1': No such file or directory
	
	Initiating transfer...
	I0911 11:20:12.620694 2234986 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.1
	I0911 11:20:12.631860 2234986 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubeadm
	I0911 11:20:12.631860 2234986 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubelet
	I0911 11:20:12.631857 2234986 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl.sha256
	I0911 11:20:12.632102 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubectl -> /var/lib/minikube/binaries/v1.28.1/kubectl
	I0911 11:20:12.632201 2234986 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl
	I0911 11:20:12.641578 2234986 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0911 11:20:12.641638 2234986 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubectl': No such file or directory
	I0911 11:20:12.641667 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubectl --> /var/lib/minikube/binaries/v1.28.1/kubectl (49864704 bytes)
	I0911 11:20:13.201528 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubeadm -> /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0911 11:20:13.201638 2234986 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm
	I0911 11:20:13.207675 2234986 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0911 11:20:13.207763 2234986 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubeadm': No such file or directory
	I0911 11:20:13.207799 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubeadm --> /var/lib/minikube/binaries/v1.28.1/kubeadm (50749440 bytes)
	I0911 11:20:13.684087 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:20:13.698554 2234986 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubelet -> /var/lib/minikube/binaries/v1.28.1/kubelet
	I0911 11:20:13.698669 2234986 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet
	I0911 11:20:13.703715 2234986 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0911 11:20:13.703754 2234986 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.1/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.1/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.1/kubelet': No such file or directory
	I0911 11:20:13.703777 2234986 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/linux/amd64/v1.28.1/kubelet --> /var/lib/minikube/binaries/v1.28.1/kubelet (110764032 bytes)
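	The kubectl, kubeadm and kubelet binaries are downloaded against the checksum URLs shown above and copied into /var/lib/minikube/binaries/v1.28.1. A rough sketch of the equivalent manual download and verification for one of them, outside the test harness:
	# Sketch: fetch kubelet v1.28.1 and verify it against the published sha256,
	# mirroring the checksum URL used in the download step above.
	curl -fsSLO https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet
	curl -fsSLO https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet.sha256
	echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check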
	I0911 11:20:14.306442 2234986 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0911 11:20:14.315272 2234986 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0911 11:20:14.332693 2234986 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:20:14.349859 2234986 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0911 11:20:14.354205 2234986 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:20:14.368111 2234986 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:20:14.368379 2234986 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:20:14.368514 2234986 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:20:14.368569 2234986 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:20:14.383545 2234986 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0911 11:20:14.384065 2234986 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:20:14.384560 2234986 main.go:141] libmachine: Using API Version  1
	I0911 11:20:14.384581 2234986 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:20:14.384956 2234986 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:20:14.385145 2234986 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:20:14.385297 2234986 start.go:301] JoinCluster: &{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true Extra
Disks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:20:14.385419 2234986 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0911 11:20:14.385436 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:20:14.388521 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:20:14.389022 2234986 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:20:14.389054 2234986 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:20:14.389194 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:20:14.389388 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:20:14.389534 2234986 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:20:14.389673 2234986 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:20:14.583224 2234986 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token j092w3.7c94fed40qx9v7xd --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 11:20:14.583645 2234986 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:20:14.583693 2234986 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j092w3.7c94fed40qx9v7xd --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-378707-m02"
	I0911 11:20:14.629247 2234986 command_runner.go:130] > [preflight] Running pre-flight checks
	I0911 11:20:14.783003 2234986 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0911 11:20:14.783045 2234986 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0911 11:20:14.817143 2234986 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:20:14.817298 2234986 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:20:14.817321 2234986 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0911 11:20:14.935246 2234986 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0911 11:20:17.574276 2234986 command_runner.go:130] > This node has joined the cluster:
	I0911 11:20:17.574302 2234986 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0911 11:20:17.574315 2234986 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0911 11:20:17.574325 2234986 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0911 11:20:17.580105 2234986 command_runner.go:130] ! W0911 11:20:14.617754     818 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0911 11:20:17.580133 2234986 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 11:20:17.580152 2234986 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j092w3.7c94fed40qx9v7xd --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-378707-m02": (2.996445255s)
	I0911 11:20:17.580176 2234986 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0911 11:20:17.850461 2234986 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0911 11:20:17.850512 2234986 start.go:303] JoinCluster complete in 3.465215234s
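	The join above is driven by two commands that appear verbatim in the log: a join command printed on the control plane, then kubeadm join run on the worker. A condensed sketch of the same flow done by hand (profile name and binary path taken from the log):
	# Sketch: print a join command on the control-plane node; the emitted
	# "kubeadm join ..." line is then run on the worker with --node-name=multinode-378707-m02.
	minikube ssh -p multinode-378707 -- 'sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0'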
	I0911 11:20:17.850529 2234986 cni.go:84] Creating CNI manager for ""
	I0911 11:20:17.850536 2234986 cni.go:136] 2 nodes found, recommending kindnet
	I0911 11:20:17.850603 2234986 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:20:17.857063 2234986 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0911 11:20:17.857088 2234986 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0911 11:20:17.857095 2234986 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0911 11:20:17.857101 2234986 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:20:17.857107 2234986 command_runner.go:130] > Access: 2023-09-11 11:18:50.196948160 +0000
	I0911 11:20:17.857112 2234986 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0911 11:20:17.857117 2234986 command_runner.go:130] > Change: 2023-09-11 11:18:48.236948160 +0000
	I0911 11:20:17.857120 2234986 command_runner.go:130] >  Birth: -
	I0911 11:20:17.857594 2234986 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:20:17.857606 2234986 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:20:17.880189 2234986 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:20:18.188697 2234986 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:20:18.193157 2234986 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:20:18.195855 2234986 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0911 11:20:18.209975 2234986 command_runner.go:130] > daemonset.apps/kindnet configured
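	With the kindnet manifest applied, a small sketch for checking the CNI rollout (the app=kindnet label is an assumption based on the upstream kindnet manifest):
	# Sketch: wait for the kindnet daemonset and list its pods across both nodes.
	kubectl --context multinode-378707 -n kube-system rollout status daemonset/kindnet
	kubectl --context multinode-378707 -n kube-system get pods -l app=kindnet -o wide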
	I0911 11:20:18.213401 2234986 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:20:18.213678 2234986 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:20:18.214118 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:20:18.214134 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:18.214143 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:18.214149 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:18.217571 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:18.217595 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:18.217603 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:18.217610 2234986 round_trippers.go:580]     Content-Length: 291
	I0911 11:20:18.217615 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:18 GMT
	I0911 11:20:18.217624 2234986 round_trippers.go:580]     Audit-Id: 876fa23d-3821-42d7-9dd2-50f4e03c75ea
	I0911 11:20:18.217637 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:18.217647 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:18.217659 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:18.217691 2234986 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"442","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0911 11:20:18.217803 2234986 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-378707" context rescaled to 1 replicas
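	The rescale above goes through the deployment's scale subresource; as a sketch, the same effect with plain kubectl:
	# Sketch: scale coredns to one replica and confirm the result.
	kubectl --context multinode-378707 -n kube-system scale deployment coredns --replicas=1
	kubectl --context multinode-378707 -n kube-system get deployment coredns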
	I0911 11:20:18.217840 2234986 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:20:18.220827 2234986 out.go:177] * Verifying Kubernetes components...
	I0911 11:20:18.222670 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:20:18.244161 2234986 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:20:18.244387 2234986 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:20:18.244723 2234986 node_ready.go:35] waiting up to 6m0s for node "multinode-378707-m02" to be "Ready" ...
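	The polling below repeatedly GETs the node object until its Ready condition turns True. A one-line sketch of the same wait using kubectl (the 6m0s timeout matches the log):
	# Sketch: block until the new worker reports Ready, with the same 6m timeout.
	kubectl --context multinode-378707 wait --for=condition=Ready node/multinode-378707-m02 --timeout=6m0s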
	I0911 11:20:18.244801 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:18.244808 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:18.244837 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:18.244852 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:18.247825 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:18.247853 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:18.247864 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:18.247872 2234986 round_trippers.go:580]     Content-Length: 3942
	I0911 11:20:18.247879 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:18 GMT
	I0911 11:20:18.247886 2234986 round_trippers.go:580]     Audit-Id: e320395f-2ce8-46b2-990a-c0da5f399542
	I0911 11:20:18.247897 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:18.247908 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:18.247919 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:18.248028 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"489","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2918 chars]
	I0911 11:20:18.248443 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:18.248463 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:18.248472 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:18.248479 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:18.250558 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:18.250585 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:18.250595 2234986 round_trippers.go:580]     Audit-Id: 37886bf1-c2b1-440a-b2d8-725b73c792d8
	I0911 11:20:18.250603 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:18.250612 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:18.250621 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:18.250631 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:18.250647 2234986 round_trippers.go:580]     Content-Length: 3942
	I0911 11:20:18.250656 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:18 GMT
	I0911 11:20:18.250749 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"489","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2918 chars]
	I0911 11:20:18.751862 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:18.751887 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:18.751896 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:18.751903 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:18.755219 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:18.755244 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:18.755252 2234986 round_trippers.go:580]     Content-Length: 3942
	I0911 11:20:18.755258 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:18 GMT
	I0911 11:20:18.755268 2234986 round_trippers.go:580]     Audit-Id: 9c268a89-b298-4000-accc-d779055cfc27
	I0911 11:20:18.755276 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:18.755284 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:18.755293 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:18.755305 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:18.755405 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"489","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2918 chars]
	I0911 11:20:19.251983 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:19.252008 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:19.252016 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:19.252022 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:19.254725 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:19.254755 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:19.254765 2234986 round_trippers.go:580]     Audit-Id: aa36be68-fa39-47a7-9ecf-21014fb85125
	I0911 11:20:19.254779 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:19.254787 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:19.254796 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:19.254808 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:19.254816 2234986 round_trippers.go:580]     Content-Length: 3942
	I0911 11:20:19.254826 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:19 GMT
	I0911 11:20:19.254938 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"489","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2918 chars]
	I0911 11:20:19.752075 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:19.752111 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:19.752127 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:19.752137 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:19.755767 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:19.755792 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:19.755802 2234986 round_trippers.go:580]     Audit-Id: 5658076b-2953-442c-afea-4f47ad22f85a
	I0911 11:20:19.755810 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:19.755817 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:19.755825 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:19.755834 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:19.755841 2234986 round_trippers.go:580]     Content-Length: 3942
	I0911 11:20:19.755849 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:19 GMT
	I0911 11:20:19.755908 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"489","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kube-controller-manager","operation":"Update","apiVersi [truncated 2918 chars]
	I0911 11:20:20.252319 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:20.252356 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:20.252370 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:20.252381 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:20.255777 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:20.255806 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:20.255819 2234986 round_trippers.go:580]     Audit-Id: 322e7f0b-c7fc-49cc-a276-10a620463db0
	I0911 11:20:20.255828 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:20.255836 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:20.255845 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:20.255864 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:20.255885 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:20.255893 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:20 GMT
	I0911 11:20:20.256021 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:20.256404 2234986 node_ready.go:58] node "multinode-378707-m02" has status "Ready":"False"
	I0911 11:20:20.751358 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:20.751385 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:20.751394 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:20.751400 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:20.754918 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:20.754944 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:20.754952 2234986 round_trippers.go:580]     Audit-Id: 9ec0dbd1-c7c1-41c7-936e-050e4edd585b
	I0911 11:20:20.754958 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:20.754963 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:20.754968 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:20.754976 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:20.754982 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:20.754988 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:20 GMT
	I0911 11:20:20.755081 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:21.252192 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:21.252216 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:21.252225 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:21.252232 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:21.255706 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:21.255735 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:21.255746 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:21.255756 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:21.255764 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:21.255771 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:21 GMT
	I0911 11:20:21.255777 2234986 round_trippers.go:580]     Audit-Id: 657a0164-9c08-45c8-8077-123e1e226306
	I0911 11:20:21.255782 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:21.255788 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:21.255867 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:21.751430 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:21.751454 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:21.751463 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:21.751469 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:21.754792 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:21.754823 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:21.754835 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:21.754844 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:21.754853 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:21.754884 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:21.754897 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:21.754906 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:21 GMT
	I0911 11:20:21.754915 2234986 round_trippers.go:580]     Audit-Id: a355c146-082a-4407-a2ba-86c49ad7a6bb
	I0911 11:20:21.755022 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:22.251427 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:22.251451 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:22.251459 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:22.251466 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:22.255917 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:20:22.255945 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:22.255953 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:22 GMT
	I0911 11:20:22.255959 2234986 round_trippers.go:580]     Audit-Id: 0e49d53f-144f-4978-9166-7fcf0a448042
	I0911 11:20:22.255965 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:22.255971 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:22.255977 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:22.255991 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:22.255999 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:22.256101 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:22.751339 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:22.751367 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:22.751376 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:22.751382 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:22.754732 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:22.754766 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:22.754776 2234986 round_trippers.go:580]     Audit-Id: f4b45488-34ad-45e4-abdf-6eb2910b6915
	I0911 11:20:22.754785 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:22.754794 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:22.754801 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:22.754810 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:22.754822 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:22.754834 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:22 GMT
	I0911 11:20:22.754938 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:22.755295 2234986 node_ready.go:58] node "multinode-378707-m02" has status "Ready":"False"
	I0911 11:20:23.251467 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:23.251498 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:23.251510 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:23.251522 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:23.255603 2234986 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:20:23.255635 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:23.255648 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:23.255656 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:23.255665 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:23.255675 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:23.255685 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:23 GMT
	I0911 11:20:23.255694 2234986 round_trippers.go:580]     Audit-Id: 56b464e0-30b2-47fb-b16a-96e9c4af135a
	I0911 11:20:23.255703 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:23.255784 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:23.751268 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:23.751290 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:23.751301 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:23.751323 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:23.754641 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:23.754674 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:23.754687 2234986 round_trippers.go:580]     Audit-Id: b93d12ca-4179-431d-9976-78f71514f9c6
	I0911 11:20:23.754695 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:23.754704 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:23.754713 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:23.754732 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:23.754740 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:23.754747 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:23 GMT
	I0911 11:20:23.754854 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:24.251324 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:24.251356 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:24.251366 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:24.251372 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:24.254154 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:24.254184 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:24.254196 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:24.254204 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:24.254213 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:24.254222 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:24.254233 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:24 GMT
	I0911 11:20:24.254245 2234986 round_trippers.go:580]     Audit-Id: c4f150dd-21ff-4657-b26b-c01bf7de1aa4
	I0911 11:20:24.254256 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:24.254364 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:24.751809 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:24.751836 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:24.751844 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:24.751850 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:24.755032 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:24.755066 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:24.755075 2234986 round_trippers.go:580]     Audit-Id: ea9ce163-d628-4ec5-9549-9fb908c338c3
	I0911 11:20:24.755081 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:24.755087 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:24.755092 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:24.755098 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:24.755105 2234986 round_trippers.go:580]     Content-Length: 4051
	I0911 11:20:24.755111 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:24 GMT
	I0911 11:20:24.755208 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"499","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3027 chars]
	I0911 11:20:24.755529 2234986 node_ready.go:58] node "multinode-378707-m02" has status "Ready":"False"
	I0911 11:20:25.251457 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:25.251479 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.251488 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.251496 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.254421 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.254443 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.254450 2234986 round_trippers.go:580]     Audit-Id: 521f46df-6b02-4a95-ac0c-24baad70f41d
	I0911 11:20:25.254457 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.254462 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.254473 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.254479 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.254484 2234986 round_trippers.go:580]     Content-Length: 3726
	I0911 11:20:25.254491 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.254570 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"519","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2702 chars]
	I0911 11:20:25.254887 2234986 node_ready.go:49] node "multinode-378707-m02" has status "Ready":"True"
	I0911 11:20:25.254906 2234986 node_ready.go:38] duration metric: took 7.010162489s waiting for node "multinode-378707-m02" to be "Ready" ...
	I0911 11:20:25.254914 2234986 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
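	(Editorial note, not part of the captured log: the trace above records a readiness poll loop, repeated GETs of /api/v1/nodes/multinode-378707-m02 roughly every 500ms until node_ready.go reports "Ready":"True", followed by per-pod waits. The sketch below is a minimal, hypothetical reconstruction of that polling pattern using client-go, not minikube's actual node_ready.go implementation; the kubeconfig path, poll interval, and the helper name waitForNodeReady are assumptions for illustration.)

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// waitForNodeReady polls the node object until its NodeReady condition is True,
	// mirroring the repeated GET /api/v1/nodes/<name> requests seen in the log above.
	func waitForNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			for _, cond := range node.Status.Conditions {
				if cond.Type == corev1.NodeReady && cond.Status == corev1.ConditionTrue {
					return nil // corresponds to node_ready.go:49 reporting "Ready":"True"
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond): // roughly the ~500ms cadence visible in the timestamps
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute) // matches the "waiting up to 6m0s" budget in the log
		defer cancel()
		if err := waitForNodeReady(ctx, cs, "multinode-378707-m02"); err != nil {
			panic(err)
		}
		fmt.Println("node is Ready; per-pod waits (pod_ready.go) would follow the same poll pattern")
	}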
	I0911 11:20:25.254977 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:20:25.254988 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.254995 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.255001 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.258879 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:25.258908 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.258919 2234986 round_trippers.go:580]     Audit-Id: ba1816dc-18a3-4364-9d3f-3d49d17c1630
	I0911 11:20:25.258927 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.258935 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.258944 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.258952 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.258960 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.260568 2234986 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"519"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"437","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 67374 chars]
	I0911 11:20:25.262910 2234986 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.262999 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:20:25.263008 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.263016 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.263024 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.265450 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.265468 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.265474 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.265482 2234986 round_trippers.go:580]     Audit-Id: dd791a2c-ae9e-45e9-9699-607cf145efde
	I0911 11:20:25.265488 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.265493 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.265498 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.265504 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.265704 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"437","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0911 11:20:25.266194 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:25.266217 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.266276 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.266292 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.268718 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.268736 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.268743 2234986 round_trippers.go:580]     Audit-Id: 4c95feb0-457b-428e-9a59-3b9829f30df1
	I0911 11:20:25.268749 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.268754 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.268760 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.268765 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.268770 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.268968 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:20:25.269402 2234986 pod_ready.go:92] pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:20:25.269417 2234986 pod_ready.go:81] duration metric: took 6.483208ms waiting for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.269429 2234986 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.269494 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-378707
	I0911 11:20:25.269499 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.269506 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.269512 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.271788 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.271804 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.271810 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.271815 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.271821 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.271826 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.271831 2234986 round_trippers.go:580]     Audit-Id: 961369b5-2e93-40e5-bed2-3522d830c05b
	I0911 11:20:25.271837 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.272218 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-378707","namespace":"kube-system","uid":"30882221-42a4-42a4-9911-63a8ff26c903","resourceVersion":"290","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.237:2379","kubernetes.io/config.hash":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.mirror":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.seen":"2023-09-11T11:19:21.954681050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0911 11:20:25.272673 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:25.272688 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.272695 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.272701 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.274576 2234986 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:20:25.274595 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.274603 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.274611 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.274620 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.274629 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.274638 2234986 round_trippers.go:580]     Audit-Id: 91afc6fe-ac35-4414-8de9-a55b7da07a6f
	I0911 11:20:25.274645 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.274875 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:20:25.275177 2234986 pod_ready.go:92] pod "etcd-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:20:25.275191 2234986 pod_ready.go:81] duration metric: took 5.755153ms waiting for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.275205 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.275259 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-378707
	I0911 11:20:25.275266 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.275284 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.275293 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.277569 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.277593 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.277603 2234986 round_trippers.go:580]     Audit-Id: e3e23570-fdf4-408c-a40b-4d8c7f4a3c67
	I0911 11:20:25.277611 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.277619 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.277626 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.277635 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.277642 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.277812 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-378707","namespace":"kube-system","uid":"6cc96039-3a17-4243-93b6-4bf3ed6f69a8","resourceVersion":"328","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.237:8443","kubernetes.io/config.hash":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.mirror":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.seen":"2023-09-11T11:19:21.954683933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0911 11:20:25.278237 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:25.278251 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.278258 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.278265 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.280431 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.280452 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.280462 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.280471 2234986 round_trippers.go:580]     Audit-Id: 2f105d9c-5d2d-4e65-acfb-5f38a7594cb4
	I0911 11:20:25.280480 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.280488 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.280540 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.280553 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.281317 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:20:25.281600 2234986 pod_ready.go:92] pod "kube-apiserver-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:20:25.281612 2234986 pod_ready.go:81] duration metric: took 6.40235ms waiting for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.281622 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.281675 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-378707
	I0911 11:20:25.281682 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.281689 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.281695 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.284092 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.284107 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.284114 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.284120 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.284125 2234986 round_trippers.go:580]     Audit-Id: d1271560-1728-4433-8dd2-de7c820724b4
	I0911 11:20:25.284130 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.284136 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.284144 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.284449 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-378707","namespace":"kube-system","uid":"7bd2ecf1-1558-4680-9075-d30d989a0568","resourceVersion":"294","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.mirror":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.seen":"2023-09-11T11:19:21.954684910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0911 11:20:25.284880 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:25.284893 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.284900 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.284906 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.287359 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.287384 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.287394 2234986 round_trippers.go:580]     Audit-Id: da9b9554-c4b6-495f-981d-ac9165a53eb0
	I0911 11:20:25.287403 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.287411 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.287420 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.287428 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.287436 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.287598 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:20:25.287907 2234986 pod_ready.go:92] pod "kube-controller-manager-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:20:25.287924 2234986 pod_ready.go:81] duration metric: took 6.29549ms waiting for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.287934 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.452337 2234986 request.go:629] Waited for 164.322254ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:20:25.452402 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:20:25.452407 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.452415 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.452422 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.455564 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:25.455590 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.455598 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.455604 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.455615 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.455621 2234986 round_trippers.go:580]     Audit-Id: 65f0c813-f586-44c0-b2a9-4138a09d7815
	I0911 11:20:25.455626 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.455632 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.455741 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gcxx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7","resourceVersion":"506","creationTimestamp":"2023-09-11T11:20:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
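	(Editorial note, not part of the captured log: the "Waited for ... due to client-side throttling, not priority and fairness" lines from request.go:629 come from client-go's client-side token-bucket rate limiter, which defaults to roughly 5 QPS with a burst of 10, so the burst of status GETs above gets queued locally. A minimal, hypothetical sketch of where that limit is configured is below; the QPS/Burst values chosen are assumptions, not what minikube uses.)

	package main

	import (
		"fmt"

		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile) // assumed kubeconfig location
		if err != nil {
			panic(err)
		}
		// Raising QPS/Burst relaxes the client-side limiter that produced the
		// "Waited for ... due to client-side throttling" log lines; values are illustrative.
		cfg.QPS = 50
		cfg.Burst = 100
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		fmt.Printf("client constructed with relaxed rate limits: %T\n", cs)
	}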
	I0911 11:20:25.651492 2234986 request.go:629] Waited for 195.315662ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:25.651584 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:20:25.651591 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.651603 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.651618 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.654355 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:25.654377 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.654386 2234986 round_trippers.go:580]     Audit-Id: cb9d3a4c-1b30-490d-a94d-47657d6eb4d8
	I0911 11:20:25.654392 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.654399 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.654407 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.654416 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.654424 2234986 round_trippers.go:580]     Content-Length: 3726
	I0911 11:20:25.654433 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.654532 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"519","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 2702 chars]
	I0911 11:20:25.654823 2234986 pod_ready.go:92] pod "kube-proxy-8gcxx" in "kube-system" namespace has status "Ready":"True"
	I0911 11:20:25.654838 2234986 pod_ready.go:81] duration metric: took 366.899026ms waiting for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.654853 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:25.852192 2234986 request.go:629] Waited for 197.254795ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:20:25.852282 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:20:25.852289 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:25.852303 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:25.852316 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:25.855485 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:25.855515 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:25.855526 2234986 round_trippers.go:580]     Audit-Id: 58d351ef-2cac-42c2-bd58-d01c03a74ba3
	I0911 11:20:25.855534 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:25.855543 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:25.855551 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:25.855560 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:25.855569 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:25 GMT
	I0911 11:20:25.855764 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-snbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"c3bb9995-3cd6-4433-a326-3da0a7f4aff3","resourceVersion":"408","creationTimestamp":"2023-09-11T11:19:35Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0911 11:20:26.052490 2234986 request.go:629] Waited for 196.253426ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:26.052554 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:26.052559 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:26.052566 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:26.052573 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:26.055390 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:26.055421 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:26.055432 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:26.055444 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:26.055452 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:26.055460 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:26 GMT
	I0911 11:20:26.055469 2234986 round_trippers.go:580]     Audit-Id: 79ac3894-fbac-4a2e-b586-9dec44102608
	I0911 11:20:26.055477 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:26.055682 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:20:26.056176 2234986 pod_ready.go:92] pod "kube-proxy-snbc8" in "kube-system" namespace has status "Ready":"True"
	I0911 11:20:26.056198 2234986 pod_ready.go:81] duration metric: took 401.333733ms waiting for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:26.056208 2234986 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:26.251622 2234986 request.go:629] Waited for 195.317016ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:20:26.251691 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:20:26.251696 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:26.251704 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:26.251711 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:26.254765 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:26.254788 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:26.254798 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:26.254804 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:26.254810 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:26.254815 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:26.254820 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:26 GMT
	I0911 11:20:26.254826 2234986 round_trippers.go:580]     Audit-Id: 00db2173-57ce-49a6-993f-560a9fda4812
	I0911 11:20:26.254950 2234986 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-378707","namespace":"kube-system","uid":"51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7","resourceVersion":"295","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.mirror":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.seen":"2023-09-11T11:19:21.954685589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0911 11:20:26.451869 2234986 request.go:629] Waited for 196.426809ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:26.451950 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:20:26.451955 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:26.451963 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:26.451969 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:26.454654 2234986 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:20:26.454676 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:26.454684 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:26 GMT
	I0911 11:20:26.454689 2234986 round_trippers.go:580]     Audit-Id: 9b4dee67-48e0-4337-9149-75021d114ab2
	I0911 11:20:26.454694 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:26.454700 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:26.454705 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:26.454718 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:26.454869 2234986 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5899 chars]
	I0911 11:20:26.455275 2234986 pod_ready.go:92] pod "kube-scheduler-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:20:26.455297 2234986 pod_ready.go:81] duration metric: took 399.083735ms waiting for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:20:26.455309 2234986 pod_ready.go:38] duration metric: took 1.200384887s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:20:26.455324 2234986 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:20:26.455376 2234986 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:20:26.469622 2234986 system_svc.go:56] duration metric: took 14.276662ms WaitForService to wait for kubelet.
	I0911 11:20:26.469653 2234986 kubeadm.go:581] duration metric: took 8.251780035s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:20:26.469675 2234986 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:20:26.652173 2234986 request.go:629] Waited for 182.393617ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0911 11:20:26.652249 2234986 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0911 11:20:26.652256 2234986 round_trippers.go:469] Request Headers:
	I0911 11:20:26.652267 2234986 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:20:26.652279 2234986 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:20:26.655308 2234986 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:20:26.655331 2234986 round_trippers.go:577] Response Headers:
	I0911 11:20:26.655339 2234986 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:20:26 GMT
	I0911 11:20:26.655345 2234986 round_trippers.go:580]     Audit-Id: 76e90f05-5694-4026-8222-03570e122ba2
	I0911 11:20:26.655352 2234986 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:20:26.655361 2234986 round_trippers.go:580]     Content-Type: application/json
	I0911 11:20:26.655375 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:20:26.655383 2234986 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:20:26.655663 2234986 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"519"},"items":[{"metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"418","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 9646 chars]
	I0911 11:20:26.656260 2234986 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:20:26.656281 2234986 node_conditions.go:123] node cpu capacity is 2
	I0911 11:20:26.656292 2234986 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:20:26.656296 2234986 node_conditions.go:123] node cpu capacity is 2
	I0911 11:20:26.656300 2234986 node_conditions.go:105] duration metric: took 186.620335ms to run NodePressure ...
	I0911 11:20:26.656314 2234986 start.go:228] waiting for startup goroutines ...
	I0911 11:20:26.656342 2234986 start.go:242] writing updated cluster config ...
	I0911 11:20:26.656648 2234986 ssh_runner.go:195] Run: rm -f paused
	I0911 11:20:26.706268 2234986 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 11:20:26.708717 2234986 out.go:177] * Done! kubectl is now configured to use "multinode-378707" cluster and "default" namespace by default
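(Editor's note, not part of the captured output.) The wait sequence logged above — pod_ready, system_svc, node_conditions — polls each system pod for the Ready condition, checks that the kubelet service is active, and verifies node capacity before minikube reports "Done!". For reference, a minimal client-go sketch of that kind of Ready poll, written independently of minikube's own pod_ready.go (the kubeconfig path is assumed to be the default ~/.kube/config; the pod name is the one taken from the log above):

    // readypoll.go - illustrative sketch only, not minikube source.
    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumption: kubeconfig is at the default ~/.kube/config location.
        cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll until the pod reports Ready=True, with the same 6-minute
        // per-pod budget the log above shows ("waiting up to 6m0s").
        podName := "kube-scheduler-multinode-378707" // example pod taken from the log
        err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
            pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), podName, metav1.GetOptions{})
            if err != nil {
                return false, nil // transient API error: keep polling
            }
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady {
                    return cond.Status == corev1.ConditionTrue, nil
                }
            }
            return false, nil
        })
        if err != nil {
            panic(err)
        }
        fmt.Printf("pod %s is Ready\n", podName)
    }

A shell equivalent under the same assumptions would be: kubectl --context multinode-378707 -n kube-system wait --for=condition=ready pod/kube-scheduler-multinode-378707 --timeout=6m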
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 11:18:49 UTC, ends at Mon 2023-09-11 11:20:35 UTC. --
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.340568028Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-4jnst,Uid:6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431227804015169,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:20:27.466687631Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-fzpjk,Uid:f72f6ba0-92a3-4108-a37f-e6ad5009c37c,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1694431181990120322,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:19:41.627820101Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:77e1a93d-fc34-4f05-8320-169bb6c93e46,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431181976513584,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]st
ring{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-11T11:19:41.634243708Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&PodSandboxMetadata{Name:kube-proxy-snbc8,Uid:c3bb9995-3cd6-4433-a326-3da0a7f4aff3,Namespace:kube-system,At
tempt:0,},State:SANDBOX_READY,CreatedAt:1694431177804167628,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7f4aff3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:19:35.670251479Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&PodSandboxMetadata{Name:kindnet-gxpnd,Uid:e59da67c-e818-45db-bbcd-db99a4310bf1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431177392185870,Labels:map[string]string{app: kindnet,controller-revision-hash: 77b9cf4878,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59da67c-e818-45db-bbcd-db99a4310bf1,k8s-app: kindnet,pod-template-gener
ation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:19:36.160471829Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-378707,Uid:47ac46ded21e848957a0f2d3767001da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431153510556088,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 47ac46ded21e848957a0f2d3767001da,kubernetes.io/config.seen: 2023-09-11T11:19:12.948374189Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&PodSandboxMetadata{Name:kube-apiserver-mul
tinode-378707,Uid:4ac3958118ce3f6e7dda52fe654787ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431153502761874,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.237:8443,kubernetes.io/config.hash: 4ac3958118ce3f6e7dda52fe654787ec,kubernetes.io/config.seen: 2023-09-11T11:19:12.948372242Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-378707,Uid:ee5490370c5fc8b73824fd7337130039,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431153463273383,Labels:map[string]string{component: kube-controller-manager,io.kub
ernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ee5490370c5fc8b73824fd7337130039,kubernetes.io/config.seen: 2023-09-11T11:19:12.948373350Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&PodSandboxMetadata{Name:etcd-multinode-378707,Uid:301ff3085dd9ceb3eda8ae352974f3c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431153456302343,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.237:2379,kuber
netes.io/config.hash: 301ff3085dd9ceb3eda8ae352974f3c3,kubernetes.io/config.seen: 2023-09-11T11:19:12.948367943Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=3ef5fe42-4583-412d-ace7-ba91c49a542e name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.341303890Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=77878af2-8ae5-45bb-aa7f-162780a2d8b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.341442855Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=77878af2-8ae5-45bb-aa7f-162780a2d8b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.341635916Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=77878af2-8ae5-45bb-aa7f-162780a2d8b7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.871670879Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c379c6d8-687e-4e0c-881a-4617d979ae96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.871733574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c379c6d8-687e-4e0c-881a-4617d979ae96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.871939593Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c379c6d8-687e-4e0c-881a-4617d979ae96 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.912728519Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=62be3a75-f175-450f-9fb9-5906c77c6526 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.912793975Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=62be3a75-f175-450f-9fb9-5906c77c6526 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.913088318Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=62be3a75-f175-450f-9fb9-5906c77c6526 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.947593926Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=23b6e64d-7c37-42c8-b89e-9ab965ec22d1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.947655959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=23b6e64d-7c37-42c8-b89e-9ab965ec22d1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.947881293Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=23b6e64d-7c37-42c8-b89e-9ab965ec22d1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.988500240Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2f02696c-4e60-41e8-ad2c-ef690cf6f1ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.988563477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2f02696c-4e60-41e8-ad2c-ef690cf6f1ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:34 multinode-378707 crio[718]: time="2023-09-11 11:20:34.988832253Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2f02696c-4e60-41e8-ad2c-ef690cf6f1ca name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.025831352Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fd4358af-a825-4abf-a315-129aadcc0966 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.025895125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fd4358af-a825-4abf-a315-129aadcc0966 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.026265607Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fd4358af-a825-4abf-a315-129aadcc0966 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.061351336Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=82f39f5e-af9f-422c-91dd-51d8cb8f5df2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.061417423Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=82f39f5e-af9f-422c-91dd-51d8cb8f5df2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.061606131Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=82f39f5e-af9f-422c-91dd-51d8cb8f5df2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.120187551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b1030182-0dff-4e90-9918-820815f17e28 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.120276071Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b1030182-0dff-4e90-9918-820815f17e28 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:20:35 multinode-378707 crio[718]: time="2023-09-11 11:20:35.120535986Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:1825e9730adc54123cdeb1b778380859f65351d1e2f021ebc235db5c492da7e4,PodSandboxId:4b2ed411ea17462812eec875fd6e4849e5ecf55e81b36eeafae923eadcd9aa81,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431229304495224,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c,PodSandboxId:c3ccde8f6334177cd5327c22bf1547536531a9c620b6759514522e6834a49e9b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431182804241987,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e9d223c912c5b4ea739cd3607a37866fdeca4f559038398d83d3c28a00c3a3fd,PodSandboxId:cf848094509b2fbfae7f57ad324d54fe46f88e9f6820775c67df45402b74831f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431182480824338,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.names
pace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6,PodSandboxId:63b7a441bbbe1b8352e03448548e93ff1e939756294f6d1a40525b7c51d97976,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431180230255206,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7,PodSandboxId:aa04afad9741d792f9b751187ce780c3006303b4a46447d9021979d15b2d79a8,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431178176614003,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce,PodSandboxId:74f7e1509b90bea51d9adb6bdd214960978b932941284cc6677fc0a9dfced7c8,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431154566171672,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Ann
otations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0,PodSandboxId:8f12bbb7d8f2607124a9ffa906a5e47ef4d78ba7e707c50627b143a9572daca4,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431154456865009,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.h
ash: 91e53050,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438,PodSandboxId:fe2b3c0afa3c9b8183f699463cf278485e68795b3a7fa1b045346ffca71c10c7,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431154306052036,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe33209
6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4,PodSandboxId:0cd50f29256bb99dc070874c7f4c58ec5afa4df63b9e7af5db603127aaa7dd2e,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431153973334649,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes
.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b1030182-0dff-4e90-9918-820815f17e28 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID
	1825e9730adc5       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   5 seconds ago        Running             busybox                   0                   4b2ed411ea174
	f19acc1c5ecec       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      52 seconds ago       Running             coredns                   0                   c3ccde8f63341
	e9d223c912c5b       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      52 seconds ago       Running             storage-provisioner       0                   cf848094509b2
	f41490e24fb7c       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      54 seconds ago       Running             kindnet-cni               0                   63b7a441bbbe1
	4b9002fc32a5c       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      57 seconds ago       Running             kube-proxy                0                   aa04afad9741d
	e3361dad963f7       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      About a minute ago   Running             kube-scheduler            0                   74f7e1509b90b
	47a78ed760565       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      0                   8f12bbb7d8f26
	b09f331133be7       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      About a minute ago   Running             kube-apiserver            0                   fe2b3c0afa3c9
	bd3fb9016feaf       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      About a minute ago   Running             kube-controller-manager   0                   0cd50f29256bb
	
	* 
	* ==> coredns [f19acc1c5ecec5a6e2eb6a92774ba317c699485182ca922264366dac3715068c] <==
	* [INFO] 10.244.0.3:50678 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000139411s
	[INFO] 10.244.1.2:56704 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136145s
	[INFO] 10.244.1.2:40090 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.002199542s
	[INFO] 10.244.1.2:57312 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000106574s
	[INFO] 10.244.1.2:50951 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000080549s
	[INFO] 10.244.1.2:51013 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001377358s
	[INFO] 10.244.1.2:58231 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00008607s
	[INFO] 10.244.1.2:45745 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000073619s
	[INFO] 10.244.1.2:38352 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00007039s
	[INFO] 10.244.0.3:33823 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00012192s
	[INFO] 10.244.0.3:35791 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000097544s
	[INFO] 10.244.0.3:55992 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000076199s
	[INFO] 10.244.0.3:60907 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000071241s
	[INFO] 10.244.1.2:52984 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000159384s
	[INFO] 10.244.1.2:47704 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143649s
	[INFO] 10.244.1.2:37106 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000118857s
	[INFO] 10.244.1.2:33239 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000089174s
	[INFO] 10.244.0.3:35137 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000218713s
	[INFO] 10.244.0.3:50713 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000155442s
	[INFO] 10.244.0.3:49914 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000267488s
	[INFO] 10.244.0.3:47050 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.0002105s
	[INFO] 10.244.1.2:41954 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000180837s
	[INFO] 10.244.1.2:51645 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00018804s
	[INFO] 10.244.1.2:49065 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00009295s
	[INFO] 10.244.1.2:39283 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000188268s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-378707
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-378707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=multinode-378707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_19_23_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:19:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-378707
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:20:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:19:41 +0000   Mon, 11 Sep 2023 11:19:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:19:41 +0000   Mon, 11 Sep 2023 11:19:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:19:41 +0000   Mon, 11 Sep 2023 11:19:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:19:41 +0000   Mon, 11 Sep 2023 11:19:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    multinode-378707
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6530470fdaf445d7b75a40804cd959a7
	  System UUID:                6530470f-daf4-45d7-b75a-40804cd959a7
	  Boot ID:                    2b9add0c-94c8-4ee1-9c73-d7c1a39abfa3
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4jnst                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 coredns-5dd5756b68-fzpjk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     59s
	  kube-system                 etcd-multinode-378707                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         73s
	  kube-system                 kindnet-gxpnd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      60s
	  kube-system                 kube-apiserver-multinode-378707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-controller-manager-multinode-378707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 kube-proxy-snbc8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         60s
	  kube-system                 kube-scheduler-multinode-378707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         73s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         59s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 56s                kube-proxy       
	  Normal  Starting                 83s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  82s (x8 over 83s)  kubelet          Node multinode-378707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    82s (x8 over 83s)  kubelet          Node multinode-378707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     82s (x7 over 83s)  kubelet          Node multinode-378707 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  82s                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 74s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  73s                kubelet          Node multinode-378707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    73s                kubelet          Node multinode-378707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     73s                kubelet          Node multinode-378707 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  73s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           61s                node-controller  Node multinode-378707 event: Registered Node multinode-378707 in Controller
	  Normal  NodeReady                54s                kubelet          Node multinode-378707 status is now: NodeReady
	
	
	Name:               multinode-378707-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-378707-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:20:16 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-378707-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:20:27 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:20:25 +0000   Mon, 11 Sep 2023 11:20:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:20:25 +0000   Mon, 11 Sep 2023 11:20:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:20:25 +0000   Mon, 11 Sep 2023 11:20:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:20:25 +0000   Mon, 11 Sep 2023 11:20:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    multinode-378707-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9a79f30993549fd8eb95ddb2e1d94fa
	  System UUID:                e9a79f30-9935-49fd-8eb9-5ddb2e1d94fa
	  Boot ID:                    efae247c-51a2-42e8-a0d1-33d44305a36f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-f9d7x    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-p8h9v               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18s
	  kube-system                 kube-proxy-8gcxx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14s                kube-proxy       
	  Normal  NodeHasSufficientMemory  19s (x5 over 20s)  kubelet          Node multinode-378707-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19s (x5 over 20s)  kubelet          Node multinode-378707-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19s (x5 over 20s)  kubelet          Node multinode-378707-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           16s                node-controller  Node multinode-378707-m02 event: Registered Node multinode-378707-m02 in Controller
	  Normal  NodeReady                10s                kubelet          Node multinode-378707-m02 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Sep11 11:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.093837] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.646274] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.728847] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.139596] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.063060] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep11 11:19] systemd-fstab-generator[644]: Ignoring "noauto" for root device
	[  +0.116309] systemd-fstab-generator[655]: Ignoring "noauto" for root device
	[  +0.166818] systemd-fstab-generator[668]: Ignoring "noauto" for root device
	[  +0.122993] systemd-fstab-generator[679]: Ignoring "noauto" for root device
	[  +0.236674] systemd-fstab-generator[703]: Ignoring "noauto" for root device
	[  +9.576485] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[  +9.283822] systemd-fstab-generator[1271]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [47a78ed76056583e9af291fabdf8b7bc77223affcfa922ac64d0b8da5a07f2f0] <==
	* {"level":"info","ts":"2023-09-11T11:19:16.753058Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db2c13b3d7f66f6a","local-member-id":"3f0f97df8a50e0be","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:19:16.753185Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:19:16.753232Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:19:23.21465Z","caller":"traceutil/trace.go:171","msg":"trace[975970680] transaction","detail":"{read_only:false; response_revision:276; number_of_response:1; }","duration":"139.272554ms","start":"2023-09-11T11:19:23.075354Z","end":"2023-09-11T11:19:23.214627Z","steps":["trace[975970680] 'process raft request'  (duration: 139.057668ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:19:23.221061Z","caller":"traceutil/trace.go:171","msg":"trace[2088769666] transaction","detail":"{read_only:false; response_revision:277; number_of_response:1; }","duration":"124.885614ms","start":"2023-09-11T11:19:23.096131Z","end":"2023-09-11T11:19:23.221016Z","steps":["trace[2088769666] 'process raft request'  (duration: 123.934879ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:20:17.114873Z","caller":"traceutil/trace.go:171","msg":"trace[2128698939] linearizableReadLoop","detail":"{readStateIndex:491; appliedIndex:490; }","duration":"127.468068ms","start":"2023-09-11T11:20:16.987364Z","end":"2023-09-11T11:20:17.114832Z","steps":["trace[2128698939] 'read index received'  (duration: 62.314322ms)","trace[2128698939] 'applied index is now lower than readState.Index'  (duration: 65.152795ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T11:20:17.115293Z","caller":"traceutil/trace.go:171","msg":"trace[2085093775] transaction","detail":"{read_only:false; response_revision:474; number_of_response:1; }","duration":"137.112409ms","start":"2023-09-11T11:20:16.978151Z","end":"2023-09-11T11:20:17.115264Z","steps":["trace[2085093775] 'process raft request'  (duration: 71.583535ms)","trace[2085093775] 'compare'  (duration: 64.990849ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T11:20:17.115495Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.172143ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/events/default/multinode-378707-m02.1783d43687cecece\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T11:20:17.115643Z","caller":"traceutil/trace.go:171","msg":"trace[1818427116] range","detail":"{range_begin:/registry/events/default/multinode-378707-m02.1783d43687cecece; range_end:; response_count:0; response_revision:474; }","duration":"138.337438ms","start":"2023-09-11T11:20:16.977294Z","end":"2023-09-11T11:20:17.115631Z","steps":["trace[1818427116] 'agreement among raft nodes before linearized reading'  (duration: 138.028825ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T11:20:17.512338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"253.433388ms","expected-duration":"100ms","prefix":"","request":"header:<ID:16194533609470145183 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/minions/multinode-378707-m02\" mod_revision:474 > success:<request_put:<key:\"/registry/minions/multinode-378707-m02\" value_size:2096 >> failure:<request_range:<key:\"/registry/minions/multinode-378707-m02\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-11T11:20:17.512551Z","caller":"traceutil/trace.go:171","msg":"trace[1976283909] transaction","detail":"{read_only:false; response_revision:475; number_of_response:1; }","duration":"384.293218ms","start":"2023-09-11T11:20:17.128245Z","end":"2023-09-11T11:20:17.512538Z","steps":["trace[1976283909] 'process raft request'  (duration: 130.201184ms)","trace[1976283909] 'compare'  (duration: 253.345666ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T11:20:17.512584Z","caller":"traceutil/trace.go:171","msg":"trace[336784110] transaction","detail":"{read_only:false; number_of_response:1; response_revision:476; }","duration":"382.973123ms","start":"2023-09-11T11:20:17.129595Z","end":"2023-09-11T11:20:17.512568Z","steps":["trace[336784110] 'process raft request'  (duration: 382.898353ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:20:17.512724Z","caller":"traceutil/trace.go:171","msg":"trace[587198960] transaction","detail":"{read_only:false; number_of_response:1; response_revision:476; }","duration":"383.026305ms","start":"2023-09-11T11:20:17.129692Z","end":"2023-09-11T11:20:17.512719Z","steps":["trace[587198960] 'process raft request'  (duration: 382.822165ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T11:20:17.5128Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T11:20:17.129688Z","time spent":"383.087098ms","remote":"127.0.0.1:43940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2191,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-378707-m02\" mod_revision:474 > success:<request_put:<key:\"/registry/minions/multinode-378707-m02\" value_size:2247 >> failure:<request_range:<key:\"/registry/minions/multinode-378707-m02\" > >"}
	{"level":"warn","ts":"2023-09-11T11:20:17.512808Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T11:20:17.129586Z","time spent":"383.140666ms","remote":"127.0.0.1:43940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":42,"response count":0,"response size":2191,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-378707-m02\" mod_revision:474 > success:<request_put:<key:\"/registry/minions/multinode-378707-m02\" value_size:2099 >> failure:<request_range:<key:\"/registry/minions/multinode-378707-m02\" > >"}
	{"level":"info","ts":"2023-09-11T11:20:17.513008Z","caller":"traceutil/trace.go:171","msg":"trace[2089109200] transaction","detail":"{read_only:false; response_revision:476; number_of_response:1; }","duration":"383.676011ms","start":"2023-09-11T11:20:17.129265Z","end":"2023-09-11T11:20:17.512941Z","steps":["trace[2089109200] 'process raft request'  (duration: 383.173092ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:20:17.513041Z","caller":"traceutil/trace.go:171","msg":"trace[1119922982] linearizableReadLoop","detail":"{readStateIndex:495; appliedIndex:491; }","duration":"382.272328ms","start":"2023-09-11T11:20:17.130761Z","end":"2023-09-11T11:20:17.513033Z","steps":["trace[1119922982] 'read index received'  (duration: 128.002569ms)","trace[1119922982] 'applied index is now lower than readState.Index'  (duration: 254.268681ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T11:20:17.51311Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T11:20:17.12925Z","time spent":"383.799112ms","remote":"127.0.0.1:43916","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":726,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/events/default/multinode-378707-m02.1783d43687cecece\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/multinode-378707-m02.1783d43687cecece\" value_size:646 lease:6971161572615368683 >> failure:<>"}
	{"level":"warn","ts":"2023-09-11T11:20:17.513206Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"382.455834ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T11:20:17.513259Z","caller":"traceutil/trace.go:171","msg":"trace[792539952] range","detail":"{range_begin:/registry/limitranges/kube-system/; range_end:/registry/limitranges/kube-system0; response_count:0; response_revision:476; }","duration":"382.508158ms","start":"2023-09-11T11:20:17.130741Z","end":"2023-09-11T11:20:17.513249Z","steps":["trace[792539952] 'agreement among raft nodes before linearized reading'  (duration: 382.408237ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T11:20:17.513287Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T11:20:17.130731Z","time spent":"382.550595ms","remote":"127.0.0.1:43932","response type":"/etcdserverpb.KV/Range","request count":0,"request size":72,"response count":0,"response size":28,"request content":"key:\"/registry/limitranges/kube-system/\" range_end:\"/registry/limitranges/kube-system0\" "}
	{"level":"warn","ts":"2023-09-11T11:20:17.512633Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T11:20:17.128229Z","time spent":"384.377574ms","remote":"127.0.0.1:43940","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2142,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/multinode-378707-m02\" mod_revision:474 > success:<request_put:<key:\"/registry/minions/multinode-378707-m02\" value_size:2096 >> failure:<request_range:<key:\"/registry/minions/multinode-378707-m02\" > >"}
	{"level":"warn","ts":"2023-09-11T11:20:17.51338Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"317.069863ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/csinodes/multinode-378707-m02\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T11:20:17.513521Z","caller":"traceutil/trace.go:171","msg":"trace[23320814] range","detail":"{range_begin:/registry/csinodes/multinode-378707-m02; range_end:; response_count:0; response_revision:476; }","duration":"317.208093ms","start":"2023-09-11T11:20:17.196306Z","end":"2023-09-11T11:20:17.513514Z","steps":["trace[23320814] 'agreement among raft nodes before linearized reading'  (duration: 317.060146ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T11:20:17.513563Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T11:20:17.196287Z","time spent":"317.267868ms","remote":"127.0.0.1:43988","response type":"/etcdserverpb.KV/Range","request count":0,"request size":41,"response count":0,"response size":28,"request content":"key:\"/registry/csinodes/multinode-378707-m02\" "}
	
	* 
	* ==> kernel <==
	*  11:20:35 up 1 min,  0 users,  load average: 0.87, 0.42, 0.16
	Linux multinode-378707 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [f41490e24fb7c2701c92b7685d8e1c8e2919baabd80781646ad51c696e0866b6] <==
	* I0911 11:19:40.874215       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0911 11:19:40.874369       1 main.go:107] hostIP = 192.168.39.237
	podIP = 192.168.39.237
	I0911 11:19:40.874507       1 main.go:116] setting mtu 1500 for CNI 
	I0911 11:19:40.874535       1 main.go:146] kindnetd IP family: "ipv4"
	I0911 11:19:40.874565       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0911 11:19:41.467814       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:19:41.468294       1 main.go:227] handling current node
	I0911 11:19:51.483282       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:19:51.483353       1 main.go:227] handling current node
	I0911 11:20:01.495060       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:20:01.495246       1 main.go:227] handling current node
	I0911 11:20:11.500219       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:20:11.500305       1 main.go:227] handling current node
	I0911 11:20:21.512259       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:20:21.512363       1 main.go:227] handling current node
	I0911 11:20:21.512392       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0911 11:20:21.512410       1 main.go:250] Node multinode-378707-m02 has CIDR [10.244.1.0/24] 
	I0911 11:20:21.513138       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.168.39.220 Flags: [] Table: 0} 
	I0911 11:20:31.527764       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:20:31.528067       1 main.go:227] handling current node
	I0911 11:20:31.528079       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0911 11:20:31.528086       1 main.go:250] Node multinode-378707-m02 has CIDR [10.244.1.0/24] 
	
	* 
	* ==> kube-apiserver [b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438] <==
	* I0911 11:19:18.556073       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:19:18.563275       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0911 11:19:18.579751       1 controller.go:624] quota admission added evaluator for: namespaces
	I0911 11:19:18.591510       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:19:18.591607       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:19:18.591639       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:19:18.591667       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:19:18.591695       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:19:18.622397       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:19:18.639697       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 11:19:19.446388       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0911 11:19:19.452018       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0911 11:19:19.452505       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:19:20.185348       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:19:20.241149       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 11:19:20.381765       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0911 11:19:20.391692       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.237]
	I0911 11:19:20.393143       1 controller.go:624] quota admission added evaluator for: endpoints
	I0911 11:19:20.398921       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:19:20.535389       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 11:19:21.843066       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 11:19:21.859919       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0911 11:19:21.877269       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0911 11:19:35.403627       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0911 11:19:35.566870       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	
	* 
	* ==> kube-controller-manager [bd3fb9016feafb5488573586633b12d6e2c3a523279727f29eaa0866b0428ee4] <==
	* I0911 11:19:41.632669       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="380.795µs"
	I0911 11:19:41.658462       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.816µs"
	I0911 11:19:43.157639       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="96.284µs"
	I0911 11:19:43.201165       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="20.269015ms"
	I0911 11:19:43.202850       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="306.78µs"
	I0911 11:19:44.753465       1 node_lifecycle_controller.go:1048] "Controller detected that some Nodes are Ready. Exiting master disruption mode"
	I0911 11:20:17.119060       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-378707-m02\" does not exist"
	I0911 11:20:17.521331       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-378707-m02" podCIDRs=["10.244.1.0/24"]
	I0911 11:20:17.537550       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-p8h9v"
	I0911 11:20:17.539882       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-8gcxx"
	I0911 11:20:19.761138       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-378707-m02"
	I0911 11:20:19.761254       1 event.go:307] "Event occurred" object="multinode-378707-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-378707-m02 event: Registered Node multinode-378707-m02 in Controller"
	I0911 11:20:25.129738       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-378707-m02"
	I0911 11:20:27.409928       1 event.go:307] "Event occurred" object="default/busybox" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set busybox-5bc68d56bd to 2"
	I0911 11:20:27.426503       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-f9d7x"
	I0911 11:20:27.450439       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-4jnst"
	I0911 11:20:27.478281       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="68.657722ms"
	I0911 11:20:27.489536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.180429ms"
	I0911 11:20:27.515944       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="26.333501ms"
	I0911 11:20:27.516130       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="57.451µs"
	I0911 11:20:29.777298       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-f9d7x" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-f9d7x"
	I0911 11:20:30.349015       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="9.141388ms"
	I0911 11:20:30.349940       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.172µs"
	I0911 11:20:31.623166       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="11.526604ms"
	I0911 11:20:31.623304       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="46.792µs"
	
	* 
	* ==> kube-proxy [4b9002fc32a5c9ac337e16aee3acaed76ea07c268d0e0ffcd1e4ee2983fa8ed7] <==
	* I0911 11:19:38.376685       1 server_others.go:69] "Using iptables proxy"
	I0911 11:19:38.390433       1 node.go:141] Successfully retrieved node IP: 192.168.39.237
	I0911 11:19:38.442659       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 11:19:38.442793       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 11:19:38.446368       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:19:38.446465       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:19:38.446639       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:19:38.446847       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:19:38.447880       1 config.go:188] "Starting service config controller"
	I0911 11:19:38.447943       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:19:38.448068       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:19:38.448088       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:19:38.451659       1 config.go:315] "Starting node config controller"
	I0911 11:19:38.451707       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:19:38.548789       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 11:19:38.548893       1 shared_informer.go:318] Caches are synced for service config
	I0911 11:19:38.552066       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [e3361dad963f765917e011a70c279607ea1caa3b974f996ac638de80436d3cce] <==
	* W0911 11:19:18.586067       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:19:18.586187       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 11:19:18.586871       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:19:18.586920       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0911 11:19:18.586921       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:19:18.587094       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0911 11:19:18.587085       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:19:18.587333       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 11:19:18.587342       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:19:18.587485       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 11:19:18.587294       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 11:19:18.587577       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 11:19:19.494093       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:19:19.494196       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 11:19:19.513457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:19:19.513590       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0911 11:19:19.690384       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 11:19:19.690480       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 11:19:19.692891       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:19:19.693050       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0911 11:19:19.867244       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:19:19.867343       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0911 11:19:20.030370       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 11:19:20.030474       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0911 11:19:22.176295       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:18:49 UTC, ends at Mon 2023-09-11 11:20:35 UTC. --
	Sep 11 11:19:36 multinode-378707 kubelet[1278]: I0911 11:19:36.195376    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e59da67c-e818-45db-bbcd-db99a4310bf1-lib-modules\") pod \"kindnet-gxpnd\" (UID: \"e59da67c-e818-45db-bbcd-db99a4310bf1\") " pod="kube-system/kindnet-gxpnd"
	Sep 11 11:19:36 multinode-378707 kubelet[1278]: I0911 11:19:36.195403    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e59da67c-e818-45db-bbcd-db99a4310bf1-xtables-lock\") pod \"kindnet-gxpnd\" (UID: \"e59da67c-e818-45db-bbcd-db99a4310bf1\") " pod="kube-system/kindnet-gxpnd"
	Sep 11 11:19:36 multinode-378707 kubelet[1278]: I0911 11:19:36.195428    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/e59da67c-e818-45db-bbcd-db99a4310bf1-cni-cfg\") pod \"kindnet-gxpnd\" (UID: \"e59da67c-e818-45db-bbcd-db99a4310bf1\") " pod="kube-system/kindnet-gxpnd"
	Sep 11 11:19:36 multinode-378707 kubelet[1278]: E0911 11:19:36.793097    1278 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
	Sep 11 11:19:36 multinode-378707 kubelet[1278]: E0911 11:19:36.793324    1278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c3bb9995-3cd6-4433-a326-3da0a7f4aff3-kube-proxy podName:c3bb9995-3cd6-4433-a326-3da0a7f4aff3 nodeName:}" failed. No retries permitted until 2023-09-11 11:19:37.293226602 +0000 UTC m=+15.471677614 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/c3bb9995-3cd6-4433-a326-3da0a7f4aff3-kube-proxy") pod "kube-proxy-snbc8" (UID: "c3bb9995-3cd6-4433-a326-3da0a7f4aff3") : failed to sync configmap cache: timed out waiting for the condition
	Sep 11 11:19:37 multinode-378707 kubelet[1278]: E0911 11:19:37.193466    1278 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
	Sep 11 11:19:37 multinode-378707 kubelet[1278]: E0911 11:19:37.193510    1278 projected.go:198] Error preparing data for projected volume kube-api-access-2c5x5 for pod kube-system/kube-proxy-snbc8: failed to sync configmap cache: timed out waiting for the condition
	Sep 11 11:19:37 multinode-378707 kubelet[1278]: E0911 11:19:37.193577    1278 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/c3bb9995-3cd6-4433-a326-3da0a7f4aff3-kube-api-access-2c5x5 podName:c3bb9995-3cd6-4433-a326-3da0a7f4aff3 nodeName:}" failed. No retries permitted until 2023-09-11 11:19:37.693560102 +0000 UTC m=+15.872011119 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-2c5x5" (UniqueName: "kubernetes.io/projected/c3bb9995-3cd6-4433-a326-3da0a7f4aff3-kube-api-access-2c5x5") pod "kube-proxy-snbc8" (UID: "c3bb9995-3cd6-4433-a326-3da0a7f4aff3") : failed to sync configmap cache: timed out waiting for the condition
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.141268    1278 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-snbc8" podStartSLOduration=6.141223339 podCreationTimestamp="2023-09-11 11:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:19:39.137159479 +0000 UTC m=+17.315610504" watchObservedRunningTime="2023-09-11 11:19:41.141223339 +0000 UTC m=+19.319674364"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.584372    1278 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.627712    1278 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kindnet-gxpnd" podStartSLOduration=6.627668262 podCreationTimestamp="2023-09-11 11:19:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:19:41.14178654 +0000 UTC m=+19.320237563" watchObservedRunningTime="2023-09-11 11:19:41.627668262 +0000 UTC m=+19.806119279"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.628132    1278 topology_manager.go:215] "Topology Admit Handler" podUID="f72f6ba0-92a3-4108-a37f-e6ad5009c37c" podNamespace="kube-system" podName="coredns-5dd5756b68-fzpjk"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.634474    1278 topology_manager.go:215] "Topology Admit Handler" podUID="77e1a93d-fc34-4f05-8320-169bb6c93e46" podNamespace="kube-system" podName="storage-provisioner"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.735283    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-788q2\" (UniqueName: \"kubernetes.io/projected/77e1a93d-fc34-4f05-8320-169bb6c93e46-kube-api-access-788q2\") pod \"storage-provisioner\" (UID: \"77e1a93d-fc34-4f05-8320-169bb6c93e46\") " pod="kube-system/storage-provisioner"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.735335    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f72f6ba0-92a3-4108-a37f-e6ad5009c37c-config-volume\") pod \"coredns-5dd5756b68-fzpjk\" (UID: \"f72f6ba0-92a3-4108-a37f-e6ad5009c37c\") " pod="kube-system/coredns-5dd5756b68-fzpjk"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.735362    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9mzx\" (UniqueName: \"kubernetes.io/projected/f72f6ba0-92a3-4108-a37f-e6ad5009c37c-kube-api-access-z9mzx\") pod \"coredns-5dd5756b68-fzpjk\" (UID: \"f72f6ba0-92a3-4108-a37f-e6ad5009c37c\") " pod="kube-system/coredns-5dd5756b68-fzpjk"
	Sep 11 11:19:41 multinode-378707 kubelet[1278]: I0911 11:19:41.735385    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/77e1a93d-fc34-4f05-8320-169bb6c93e46-tmp\") pod \"storage-provisioner\" (UID: \"77e1a93d-fc34-4f05-8320-169bb6c93e46\") " pod="kube-system/storage-provisioner"
	Sep 11 11:19:43 multinode-378707 kubelet[1278]: I0911 11:19:43.178346    1278 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-fzpjk" podStartSLOduration=7.178305415 podCreationTimestamp="2023-09-11 11:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:19:43.156463508 +0000 UTC m=+21.334914533" watchObservedRunningTime="2023-09-11 11:19:43.178305415 +0000 UTC m=+21.356756440"
	Sep 11 11:20:22 multinode-378707 kubelet[1278]: E0911 11:20:22.034705    1278 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 11:20:22 multinode-378707 kubelet[1278]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 11:20:22 multinode-378707 kubelet[1278]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 11:20:22 multinode-378707 kubelet[1278]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 11:20:27 multinode-378707 kubelet[1278]: I0911 11:20:27.466412    1278 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=51.466323785 podCreationTimestamp="2023-09-11 11:19:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-09-11 11:19:43.205636437 +0000 UTC m=+21.384087463" watchObservedRunningTime="2023-09-11 11:20:27.466323785 +0000 UTC m=+65.644774833"
	Sep 11 11:20:27 multinode-378707 kubelet[1278]: I0911 11:20:27.466772    1278 topology_manager.go:215] "Topology Admit Handler" podUID="6e7ad0e9-a68b-4dab-a3bd-c91300933bb8" podNamespace="default" podName="busybox-5bc68d56bd-4jnst"
	Sep 11 11:20:27 multinode-378707 kubelet[1278]: I0911 11:20:27.499532    1278 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-682vf\" (UniqueName: \"kubernetes.io/projected/6e7ad0e9-a68b-4dab-a3bd-c91300933bb8-kube-api-access-682vf\") pod \"busybox-5bc68d56bd-4jnst\" (UID: \"6e7ad0e9-a68b-4dab-a3bd-c91300933bb8\") " pod="default/busybox-5bc68d56bd-4jnst"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-378707 -n multinode-378707
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-378707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (3.34s)

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (690.24s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:283: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-378707
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-378707
E0911 11:23:47.570352 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
multinode_test.go:290: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-378707: exit status 82 (2m1.623667129s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-378707"  ...
	* Stopping node "multinode-378707"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:292: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-378707" : exit status 82
multinode_test.go:295: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378707 --wait=true -v=8 --alsologtostderr
E0911 11:24:15.053606 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:25:38.104733 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:26:22.844142 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:28:47.569441 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:29:15.053330 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:30:10.618026 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:31:22.842497 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:32:45.891401 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
multinode_test.go:295: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378707 --wait=true -v=8 --alsologtostderr: (9m25.329496406s)
multinode_test.go:300: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-378707
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-378707 -n multinode-378707
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-378707 logs -n 25: (1.72592678s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-378707 cp multinode-378707-m02:/home/docker/cp-test.txt                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile813539875/001/cp-test_multinode-378707-m02.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-378707 cp multinode-378707-m02:/home/docker/cp-test.txt                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707:/home/docker/cp-test_multinode-378707-m02_multinode-378707.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n multinode-378707 sudo cat                                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /home/docker/cp-test_multinode-378707-m02_multinode-378707.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-378707 cp multinode-378707-m02:/home/docker/cp-test.txt                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03:/home/docker/cp-test_multinode-378707-m02_multinode-378707-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n multinode-378707-m03 sudo cat                                   | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /home/docker/cp-test_multinode-378707-m02_multinode-378707-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-378707 cp testdata/cp-test.txt                                                | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-378707 cp multinode-378707-m03:/home/docker/cp-test.txt                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile813539875/001/cp-test_multinode-378707-m03.txt          |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-378707 cp multinode-378707-m03:/home/docker/cp-test.txt                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707:/home/docker/cp-test_multinode-378707-m03_multinode-378707.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n multinode-378707 sudo cat                                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /home/docker/cp-test_multinode-378707-m03_multinode-378707.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-378707 cp multinode-378707-m03:/home/docker/cp-test.txt                       | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m02:/home/docker/cp-test_multinode-378707-m03_multinode-378707-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n multinode-378707-m02 sudo cat                                   | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /home/docker/cp-test_multinode-378707-m03_multinode-378707-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-378707 node stop m03                                                          | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	| node    | multinode-378707 node start                                                             | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:22 UTC |
	|         | m03 --alsologtostderr                                                                   |                  |         |         |                     |                     |
	| node    | list -p multinode-378707                                                                | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:22 UTC |                     |
	| stop    | -p multinode-378707                                                                     | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:22 UTC |                     |
	| start   | -p multinode-378707                                                                     | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:24 UTC | 11 Sep 23 11:33 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-378707                                                                | multinode-378707 | jenkins | v1.31.2 | 11 Sep 23 11:33 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:24:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:24:02.789214 2238380 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:24:02.789380 2238380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:24:02.789391 2238380 out.go:309] Setting ErrFile to fd 2...
	I0911 11:24:02.789398 2238380 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:24:02.789596 2238380 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:24:02.790253 2238380 out.go:303] Setting JSON to false
	I0911 11:24:02.791170 2238380 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":234394,"bootTime":1694197049,"procs":198,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:24:02.791244 2238380 start.go:138] virtualization: kvm guest
	I0911 11:24:02.794011 2238380 out.go:177] * [multinode-378707] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:24:02.795569 2238380 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:24:02.796916 2238380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:24:02.795647 2238380 notify.go:220] Checking for updates...
	I0911 11:24:02.799720 2238380 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:24:02.801102 2238380 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:24:02.802456 2238380 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:24:02.803945 2238380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:24:02.806926 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:24:02.807139 2238380 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:24:02.807556 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:24:02.807614 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:24:02.822802 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33971
	I0911 11:24:02.823237 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:24:02.823804 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:24:02.823822 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:24:02.824187 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:24:02.824381 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:24:02.861946 2238380 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 11:24:02.863278 2238380 start.go:298] selected driver: kvm2
	I0911 11:24:02.863290 2238380 start.go:902] validating driver "kvm2" against &{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:
{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:fal
se ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: So
cketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:24:02.863493 2238380 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:24:02.863803 2238380 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:24:02.863901 2238380 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:24:02.879096 2238380 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:24:02.880133 2238380 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 11:24:02.880193 2238380 cni.go:84] Creating CNI manager for ""
	I0911 11:24:02.880213 2238380 cni.go:136] 3 nodes found, recommending kindnet
	I0911 11:24:02.880221 2238380 start_flags.go:321] config:
	{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIS
erverName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-pr
ovisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 Aut
oPauseInterval:1m0s}
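
As the lines above show, the CNI manager sees three nodes and recommends kindnet. A minimal sketch of that kind of selection rule, with hypothetical names and defaults (not minikube's actual implementation):

```go
package main

import "fmt"

// chooseCNI mirrors the decision logged above: with more than one node a
// multi-node-capable CNI (kindnet) is recommended; a single-node cluster
// can fall back to a simpler default. Names and defaults are illustrative.
func chooseCNI(nodeCount int, containerRuntime string) string {
	if nodeCount > 1 {
		return "kindnet"
	}
	if containerRuntime == "crio" {
		return "bridge" // assumption: single-node CRI-O with a plain bridge CNI
	}
	return "auto"
}

func main() {
	fmt.Println(chooseCNI(3, "crio")) // prints "kindnet", matching the log
}
```
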
	I0911 11:24:02.880587 2238380 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:24:02.882669 2238380 out.go:177] * Starting control plane node multinode-378707 in cluster multinode-378707
	I0911 11:24:02.884201 2238380 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:24:02.884242 2238380 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:24:02.884253 2238380 cache.go:57] Caching tarball of preloaded images
	I0911 11:24:02.884328 2238380 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:24:02.884338 2238380 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
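
The preload step above only verifies that the cached tarball for v1.28.1 on CRI-O already exists locally and skips the download. A sketch of that existence check, with the path layout taken from the log and a hypothetical helper name:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// hasPreload reports whether the preloaded image tarball for the given
// Kubernetes version is already in the local cache, mirroring the
// "Found local preload ... skipping download" step in the log.
func hasPreload(minikubeHome, k8sVersion string) (string, bool) {
	name := fmt.Sprintf("preloaded-images-k8s-v18-%s-cri-o-overlay-amd64.tar.lz4", k8sVersion)
	p := filepath.Join(minikubeHome, "cache", "preloaded-tarball", name)
	_, err := os.Stat(p)
	return p, err == nil
}

func main() {
	if p, ok := hasPreload(os.Getenv("HOME")+"/.minikube", "v1.28.1"); ok {
		fmt.Println("found local preload, skipping download:", p)
	} else {
		fmt.Println("preload missing, would download:", p)
	}
}
```
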
	I0911 11:24:02.884477 2238380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:24:02.884685 2238380 start.go:365] acquiring machines lock for multinode-378707: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:24:02.884728 2238380 start.go:369] acquired machines lock for "multinode-378707" in 23.071µs
	I0911 11:24:02.884744 2238380 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:24:02.884755 2238380 fix.go:54] fixHost starting: 
	I0911 11:24:02.885053 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:24:02.885089 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:24:02.900223 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38719
	I0911 11:24:02.900696 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:24:02.901264 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:24:02.901291 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:24:02.901615 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:24:02.901809 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:24:02.901948 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetState
	I0911 11:24:02.903633 2238380 fix.go:102] recreateIfNeeded on multinode-378707: state=Running err=<nil>
	W0911 11:24:02.903653 2238380 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:24:02.905746 2238380 out.go:177] * Updating the running kvm2 "multinode-378707" VM ...
	I0911 11:24:02.907392 2238380 machine.go:88] provisioning docker machine ...
	I0911 11:24:02.907409 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:24:02.907608 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:24:02.907785 2238380 buildroot.go:166] provisioning hostname "multinode-378707"
	I0911 11:24:02.907811 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:24:02.907934 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:24:02.910671 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:24:02.911128 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:24:02.911161 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:24:02.911316 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:24:02.911513 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:24:02.911659 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:24:02.911782 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:24:02.911920 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:24:02.912392 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:24:02.912413 2238380 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-378707 && echo "multinode-378707" | sudo tee /etc/hostname
	I0911 11:24:21.285147 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:27.365110 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:30.437074 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:36.517091 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:39.589147 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:45.669124 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:48.741085 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:54.821143 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:24:57.893144 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:03.973173 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:07.045133 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:13.125145 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:16.197095 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:22.277112 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:25.349166 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:31.429149 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:34.501125 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:40.581127 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:43.653220 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:49.733157 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:52.805124 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:25:58.885105 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:01.957060 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:08.037143 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:11.109122 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:17.189090 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:20.261156 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:26.341163 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:29.413164 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:35.493169 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:38.565104 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:44.645160 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:47.717162 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:53.797132 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:26:56.869102 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:02.949096 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:06.021134 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:12.101118 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:15.173068 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:21.253109 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:24.325112 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:30.405114 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:33.477136 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:39.557104 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:42.629084 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:48.709190 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:51.781105 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:27:57.861126 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:00.933086 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:07.013161 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:10.085131 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:16.165144 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:19.237162 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:25.317243 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:28.389116 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:34.469148 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:37.541129 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:43.621195 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:46.693133 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:52.773111 2238380 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.237:22: connect: no route to host
	I0911 11:28:55.775588 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:28:55.775629 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:28:55.777850 2238380 machine.go:91] provisioned docker machine in 4m52.870435484s
	I0911 11:28:55.777901 2238380 fix.go:56] fixHost completed within 4m52.893151958s
	I0911 11:28:55.777939 2238380 start.go:83] releasing machines lock for "multinode-378707", held for 4m52.893200053s
	W0911 11:28:55.777961 2238380 start.go:672] error starting host: provision: host is not running
	W0911 11:28:55.778107 2238380 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0911 11:28:55.778118 2238380 start.go:687] Will try again in 5 seconds ...
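
The long run of "no route to host" lines above is the provisioner repeatedly dialing the guest's SSH port until it gives up, reports "host is not running", and schedules a retry. A minimal sketch of that dial-until-deadline loop; the interval and timeout values are illustrative, not minikube's actual settings:

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// waitForSSH dials addr (host:port) until it connects or the deadline passes,
// logging each failure the way the provisioner does above.
func waitForSSH(addr string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
		if err == nil {
			conn.Close()
			return nil
		}
		fmt.Println("Error dialing TCP:", err)
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for %s after %s", addr, timeout)
		}
		time.Sleep(interval)
	}
}

func main() {
	if err := waitForSSH("192.168.39.237:22", 3*time.Second, 30*time.Second); err != nil {
		fmt.Println("StartHost failed, but will try again:", err)
	}
}
```
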
	I0911 11:29:00.779720 2238380 start.go:365] acquiring machines lock for multinode-378707: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:29:00.779892 2238380 start.go:369] acquired machines lock for "multinode-378707" in 106.909µs
	I0911 11:29:00.779921 2238380 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:29:00.779929 2238380 fix.go:54] fixHost starting: 
	I0911 11:29:00.780368 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:29:00.780395 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:29:00.796408 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I0911 11:29:00.796943 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:29:00.797524 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:29:00.797550 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:29:00.797979 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:29:00.798179 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:29:00.798386 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetState
	I0911 11:29:00.800494 2238380 fix.go:102] recreateIfNeeded on multinode-378707: state=Stopped err=<nil>
	I0911 11:29:00.800538 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	W0911 11:29:00.800741 2238380 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:29:00.804066 2238380 out.go:177] * Restarting existing kvm2 VM for "multinode-378707" ...
	I0911 11:29:00.805503 2238380 main.go:141] libmachine: (multinode-378707) Calling .Start
	I0911 11:29:00.805745 2238380 main.go:141] libmachine: (multinode-378707) Ensuring networks are active...
	I0911 11:29:00.806668 2238380 main.go:141] libmachine: (multinode-378707) Ensuring network default is active
	I0911 11:29:00.807070 2238380 main.go:141] libmachine: (multinode-378707) Ensuring network mk-multinode-378707 is active
	I0911 11:29:00.807450 2238380 main.go:141] libmachine: (multinode-378707) Getting domain xml...
	I0911 11:29:00.808212 2238380 main.go:141] libmachine: (multinode-378707) Creating domain...
	I0911 11:29:02.058713 2238380 main.go:141] libmachine: (multinode-378707) Waiting to get IP...
	I0911 11:29:02.059654 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:02.060182 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:02.060262 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:02.060174 2239181 retry.go:31] will retry after 244.193245ms: waiting for machine to come up
	I0911 11:29:02.305970 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:02.306918 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:02.306967 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:02.306904 2239181 retry.go:31] will retry after 294.62728ms: waiting for machine to come up
	I0911 11:29:02.603574 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:02.604246 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:02.604273 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:02.604180 2239181 retry.go:31] will retry after 468.457411ms: waiting for machine to come up
	I0911 11:29:03.074062 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:03.074606 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:03.074695 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:03.074548 2239181 retry.go:31] will retry after 398.767953ms: waiting for machine to come up
	I0911 11:29:03.474989 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:03.475512 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:03.475545 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:03.475452 2239181 retry.go:31] will retry after 617.003283ms: waiting for machine to come up
	I0911 11:29:04.094549 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:04.094992 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:04.095024 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:04.094932 2239181 retry.go:31] will retry after 816.544185ms: waiting for machine to come up
	I0911 11:29:04.912947 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:04.913348 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:04.913400 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:04.913290 2239181 retry.go:31] will retry after 1.02790009s: waiting for machine to come up
	I0911 11:29:05.942614 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:05.943043 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:05.943070 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:05.942990 2239181 retry.go:31] will retry after 1.187567479s: waiting for machine to come up
	I0911 11:29:07.132451 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:07.132855 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:07.132878 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:07.132801 2239181 retry.go:31] will retry after 1.187577734s: waiting for machine to come up
	I0911 11:29:08.322088 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:08.322755 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:08.322785 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:08.322667 2239181 retry.go:31] will retry after 2.271608887s: waiting for machine to come up
	I0911 11:29:10.596619 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:10.597196 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:10.597228 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:10.597118 2239181 retry.go:31] will retry after 2.725327116s: waiting for machine to come up
	I0911 11:29:13.325706 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:13.326163 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:13.326199 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:13.326094 2239181 retry.go:31] will retry after 3.023597557s: waiting for machine to come up
	I0911 11:29:16.351419 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:16.351840 2238380 main.go:141] libmachine: (multinode-378707) DBG | unable to find current IP address of domain multinode-378707 in network mk-multinode-378707
	I0911 11:29:16.351866 2238380 main.go:141] libmachine: (multinode-378707) DBG | I0911 11:29:16.351788 2239181 retry.go:31] will retry after 4.135097948s: waiting for machine to come up
	I0911 11:29:20.491084 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.491533 2238380 main.go:141] libmachine: (multinode-378707) Found IP for machine: 192.168.39.237
	I0911 11:29:20.491566 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has current primary IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.491573 2238380 main.go:141] libmachine: (multinode-378707) Reserving static IP address...
	I0911 11:29:20.491988 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "multinode-378707", mac: "52:54:00:57:31:1a", ip: "192.168.39.237"} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:20.492032 2238380 main.go:141] libmachine: (multinode-378707) DBG | skip adding static IP to network mk-multinode-378707 - found existing host DHCP lease matching {name: "multinode-378707", mac: "52:54:00:57:31:1a", ip: "192.168.39.237"}
	I0911 11:29:20.492049 2238380 main.go:141] libmachine: (multinode-378707) Reserved static IP address: 192.168.39.237
	I0911 11:29:20.492065 2238380 main.go:141] libmachine: (multinode-378707) Waiting for SSH to be available...
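
Between "Creating domain..." and "Found IP for machine" the driver polls the network for a DHCP lease matching the VM's MAC address, waiting a little longer after each miss (244ms, 294ms, ... up to ~4.1s). A sketch of that grow-the-wait retry pattern, with a hypothetical lookup function standing in for the lease query:

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupLeaseIP stands in for querying the hypervisor network for a DHCP
// lease; here it simply fails a few times before returning an address.
func lookupLeaseIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.39.237", nil
}

// waitForIP retries with a delay that grows (and is lightly jittered) on each
// attempt, similar to the "will retry after ..." lines in the log.
func waitForIP() (string, error) {
	delay := 250 * time.Millisecond
	for attempt := 1; attempt <= 12; attempt++ {
		if ip, err := lookupLeaseIP(attempt); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %s: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2
	}
	return "", errors.New("machine never reported an IP")
}

func main() {
	fmt.Println(waitForIP())
}
```
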
	I0911 11:29:20.492080 2238380 main.go:141] libmachine: (multinode-378707) DBG | Getting to WaitForSSH function...
	I0911 11:29:20.494124 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.494516 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:20.494556 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.494746 2238380 main.go:141] libmachine: (multinode-378707) DBG | Using SSH client type: external
	I0911 11:29:20.494780 2238380 main.go:141] libmachine: (multinode-378707) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa (-rw-------)
	I0911 11:29:20.494811 2238380 main.go:141] libmachine: (multinode-378707) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.237 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 11:29:20.494837 2238380 main.go:141] libmachine: (multinode-378707) DBG | About to run SSH command:
	I0911 11:29:20.494873 2238380 main.go:141] libmachine: (multinode-378707) DBG | exit 0
	I0911 11:29:20.585013 2238380 main.go:141] libmachine: (multinode-378707) DBG | SSH cmd err, output: <nil>: 
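
The "Using SSH client type: external" probe above shells out to the system ssh binary with the options shown in the log and runs `exit 0` to confirm the machine is reachable. A trimmed sketch of assembling that command (key path and address copied from the log; only a subset of the options is shown):

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	key := "/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa"
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", key,
		"-p", "22",
		"docker@192.168.39.237",
		"exit 0",
	}
	// Run the probe and report the result in the same shape as the log line.
	out, err := exec.Command("ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
```
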
	I0911 11:29:20.585473 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetConfigRaw
	I0911 11:29:20.586234 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:29:20.589154 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.589536 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:20.589591 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.589925 2238380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:29:20.590299 2238380 machine.go:88] provisioning docker machine ...
	I0911 11:29:20.590332 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:29:20.590586 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:29:20.590774 2238380 buildroot.go:166] provisioning hostname "multinode-378707"
	I0911 11:29:20.590798 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:29:20.590998 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:20.593550 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.594179 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:20.594211 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.594409 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:20.594603 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:20.594802 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:20.594914 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:20.595071 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:29:20.595723 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:29:20.595743 2238380 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-378707 && echo "multinode-378707" | sudo tee /etc/hostname
	I0911 11:29:20.722959 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-378707
	
	I0911 11:29:20.723010 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:20.725901 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.726340 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:20.726375 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.726539 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:20.726757 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:20.726935 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:20.727099 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:20.727277 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:29:20.727723 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:29:20.727756 2238380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-378707' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-378707/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-378707' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:29:20.851753 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:29:20.851797 2238380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:29:20.851827 2238380 buildroot.go:174] setting up certificates
	I0911 11:29:20.851883 2238380 provision.go:83] configureAuth start
	I0911 11:29:20.851899 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetMachineName
	I0911 11:29:20.852261 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:29:20.855358 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.855756 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:20.855797 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.856114 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:20.858530 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.858944 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:20.858979 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:20.859154 2238380 provision.go:138] copyHostCerts
	I0911 11:29:20.859193 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:29:20.859235 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:29:20.859247 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:29:20.859342 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:29:20.859455 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:29:20.859486 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:29:20.859495 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:29:20.859533 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:29:20.859597 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:29:20.859620 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:29:20.859629 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:29:20.859664 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:29:20.859740 2238380 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.multinode-378707 san=[192.168.39.237 192.168.39.237 localhost 127.0.0.1 minikube multinode-378707]
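
The "generating server cert" step above produces a server certificate whose subject alternative names cover the node IP, localhost, and the machine names listed in the log. A minimal sketch of issuing such a certificate; it is self-signed here for brevity, whereas the provisioner signs with the ca.pem/ca-key.pem pair it just copied:

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Key and template for a server certificate whose SANs match the log:
	// the node IP, loopback, and the cluster/host names.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-378707"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.237"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "multinode-378707"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}
```
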
	I0911 11:29:21.301596 2238380 provision.go:172] copyRemoteCerts
	I0911 11:29:21.301659 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:29:21.301690 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:21.304604 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.305194 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:21.305228 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.305527 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:21.305779 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:21.305987 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:21.306112 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:29:21.399919 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:29:21.400001 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0911 11:29:21.428891 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:29:21.428976 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:29:21.457623 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:29:21.457725 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:29:21.487032 2238380 provision.go:86] duration metric: configureAuth took 635.114132ms
	I0911 11:29:21.487065 2238380 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:29:21.487303 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:29:21.487510 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:21.490551 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.491045 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:21.491090 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.491201 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:21.491407 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:21.491593 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:21.491756 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:21.491937 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:29:21.492444 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:29:21.492462 2238380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:29:21.821001 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:29:21.821042 2238380 machine.go:91] provisioned docker machine in 1.230720916s
	I0911 11:29:21.821056 2238380 start.go:300] post-start starting for "multinode-378707" (driver="kvm2")
	I0911 11:29:21.821070 2238380 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:29:21.821094 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:29:21.821477 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:29:21.821526 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:21.824805 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.825326 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:21.825366 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.825538 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:21.825770 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:21.825973 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:21.826139 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:29:21.916474 2238380 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:29:21.920926 2238380 command_runner.go:130] > NAME=Buildroot
	I0911 11:29:21.920952 2238380 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0911 11:29:21.920958 2238380 command_runner.go:130] > ID=buildroot
	I0911 11:29:21.920967 2238380 command_runner.go:130] > VERSION_ID=2021.02.12
	I0911 11:29:21.920973 2238380 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0911 11:29:21.921037 2238380 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:29:21.921059 2238380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:29:21.921177 2238380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:29:21.921288 2238380 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:29:21.921304 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /etc/ssl/certs/22224712.pem
	I0911 11:29:21.921397 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:29:21.931021 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:29:21.952942 2238380 start.go:303] post-start completed in 131.868025ms
	I0911 11:29:21.952978 2238380 fix.go:56] fixHost completed within 21.17304717s
	I0911 11:29:21.953008 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:21.955409 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.955882 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:21.955918 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:21.956147 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:21.956376 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:21.956580 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:21.956761 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:21.956955 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:29:21.957428 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.237 22 <nil> <nil>}
	I0911 11:29:21.957441 2238380 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 11:29:22.077710 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694431762.022997927
	
	I0911 11:29:22.077741 2238380 fix.go:206] guest clock: 1694431762.022997927
	I0911 11:29:22.077753 2238380 fix.go:219] Guest: 2023-09-11 11:29:22.022997927 +0000 UTC Remote: 2023-09-11 11:29:21.952982749 +0000 UTC m=+319.201542631 (delta=70.015178ms)
	I0911 11:29:22.077795 2238380 fix.go:190] guest clock delta is within tolerance: 70.015178ms
	I0911 11:29:22.077806 2238380 start.go:83] releasing machines lock for "multinode-378707", held for 21.297900461s
	I0911 11:29:22.077841 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:29:22.078174 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:29:22.080882 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:22.081335 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:22.081388 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:22.081528 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:29:22.082093 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:29:22.082286 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:29:22.082390 2238380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:29:22.082452 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:22.082531 2238380 ssh_runner.go:195] Run: cat /version.json
	I0911 11:29:22.082579 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:29:22.085380 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:22.085610 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:22.085814 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:22.085843 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:22.085983 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:22.086009 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:22.086018 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:22.086224 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:29:22.086241 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:22.086436 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:29:22.086606 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:22.086613 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:29:22.086744 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:29:22.086897 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:29:22.191867 2238380 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 11:29:22.192927 2238380 command_runner.go:130] > {"iso_version": "v1.31.0-1692872107-17120", "kicbase_version": "v0.0.40-1692613578-17086", "minikube_version": "v1.31.2", "commit": "9dc31f0284dc1a8a35859648c60120733f0f8296"}
	I0911 11:29:22.193165 2238380 ssh_runner.go:195] Run: systemctl --version
	I0911 11:29:22.198634 2238380 command_runner.go:130] > systemd 247 (247)
	I0911 11:29:22.198683 2238380 command_runner.go:130] > -PAM -AUDIT -SELINUX -IMA -APPARMOR -SMACK -SYSVINIT -UTMP -LIBCRYPTSETUP -GCRYPT -GNUTLS +ACL +XZ +LZ4 -ZSTD +SECCOMP +BLKID -ELFUTILS +KMOD -IDN2 -IDN -PCRE2 default-hierarchy=hybrid
	I0911 11:29:22.198981 2238380 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:29:22.347677 2238380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:29:22.354574 2238380 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0911 11:29:22.354984 2238380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:29:22.355063 2238380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:29:22.370415 2238380 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0911 11:29:22.370513 2238380 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 11:29:22.370536 2238380 start.go:466] detecting cgroup driver to use...
	I0911 11:29:22.370608 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:29:22.384976 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:29:22.397806 2238380 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:29:22.397891 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:29:22.410586 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:29:22.424524 2238380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:29:22.438240 2238380 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/cri-docker.socket.
	I0911 11:29:22.530864 2238380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:29:22.544427 2238380 command_runner.go:130] ! Created symlink /etc/systemd/system/cri-docker.service → /dev/null.
	I0911 11:29:22.650340 2238380 docker.go:212] disabling docker service ...
	I0911 11:29:22.650456 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:29:22.664295 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:29:22.676777 2238380 command_runner.go:130] ! Failed to stop docker.service: Unit docker.service not loaded.
	I0911 11:29:22.676902 2238380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:29:22.787549 2238380 command_runner.go:130] ! Removed /etc/systemd/system/sockets.target.wants/docker.socket.
	I0911 11:29:22.787641 2238380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:29:22.911848 2238380 command_runner.go:130] ! Unit docker.service does not exist, proceeding anyway.
	I0911 11:29:22.911885 2238380 command_runner.go:130] ! Created symlink /etc/systemd/system/docker.service → /dev/null.
	I0911 11:29:22.911955 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:29:22.925750 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:29:22.944723 2238380 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0911 11:29:22.944765 2238380 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:29:22.944847 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:29:22.953808 2238380 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:29:22.953879 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:29:22.963123 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:29:22.971974 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:29:22.981145 2238380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
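For readers tracing the cri-o reconfiguration above, a minimal sketch of how to confirm the keys that the logged sed commands rewrite in the drop-in (the grep invocation is illustrative and not part of the test run; the expected values are taken from the log itself):

	# Inspect the values the provisioner just set in /etc/crio/crio.conf.d/02-crio.conf
	grep -E '^(pause_image|cgroup_manager|conmon_cgroup)' /etc/crio/crio.conf.d/02-crio.conf
	# Expected, per the sed edits logged above:
	#   pause_image = "registry.k8s.io/pause:3.9"
	#   cgroup_manager = "cgroupfs"
	#   conmon_cgroup = "pod"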
	I0911 11:29:22.990739 2238380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:29:22.999085 2238380 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:29:22.999158 2238380 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:29:22.999205 2238380 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 11:29:23.012268 2238380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:29:23.021094 2238380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:29:23.139615 2238380 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:29:23.309291 2238380 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:29:23.309373 2238380 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:29:23.314763 2238380 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0911 11:29:23.314789 2238380 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0911 11:29:23.314795 2238380 command_runner.go:130] > Device: 16h/22d	Inode: 725         Links: 1
	I0911 11:29:23.314802 2238380 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:29:23.314807 2238380 command_runner.go:130] > Access: 2023-09-11 11:29:23.239855626 +0000
	I0911 11:29:23.314814 2238380 command_runner.go:130] > Modify: 2023-09-11 11:29:23.239855626 +0000
	I0911 11:29:23.314818 2238380 command_runner.go:130] > Change: 2023-09-11 11:29:23.239855626 +0000
	I0911 11:29:23.314822 2238380 command_runner.go:130] >  Birth: -
	I0911 11:29:23.315064 2238380 start.go:534] Will wait 60s for crictl version
	I0911 11:29:23.315132 2238380 ssh_runner.go:195] Run: which crictl
	I0911 11:29:23.318980 2238380 command_runner.go:130] > /usr/bin/crictl
	I0911 11:29:23.319103 2238380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:29:23.353791 2238380 command_runner.go:130] > Version:  0.1.0
	I0911 11:29:23.353821 2238380 command_runner.go:130] > RuntimeName:  cri-o
	I0911 11:29:23.353825 2238380 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0911 11:29:23.353830 2238380 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0911 11:29:23.353910 2238380 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 11:29:23.353992 2238380 ssh_runner.go:195] Run: crio --version
	I0911 11:29:23.403836 2238380 command_runner.go:130] > crio version 1.24.1
	I0911 11:29:23.403864 2238380 command_runner.go:130] > Version:          1.24.1
	I0911 11:29:23.403874 2238380 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:29:23.403881 2238380 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:29:23.403891 2238380 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:29:23.403898 2238380 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:29:23.403904 2238380 command_runner.go:130] > Compiler:         gc
	I0911 11:29:23.403911 2238380 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:29:23.403918 2238380 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:29:23.403927 2238380 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:29:23.403934 2238380 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:29:23.403943 2238380 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:29:23.404037 2238380 ssh_runner.go:195] Run: crio --version
	I0911 11:29:23.448585 2238380 command_runner.go:130] > crio version 1.24.1
	I0911 11:29:23.448614 2238380 command_runner.go:130] > Version:          1.24.1
	I0911 11:29:23.448628 2238380 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:29:23.448635 2238380 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:29:23.448651 2238380 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:29:23.448658 2238380 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:29:23.448663 2238380 command_runner.go:130] > Compiler:         gc
	I0911 11:29:23.448669 2238380 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:29:23.448679 2238380 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:29:23.448691 2238380 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:29:23.448699 2238380 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:29:23.448709 2238380 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:29:23.452150 2238380 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 11:29:23.453592 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:29:23.456604 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:23.457059 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:29:23.457090 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:29:23.457421 2238380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 11:29:23.461634 2238380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:29:23.475143 2238380 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:29:23.475221 2238380 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:29:23.504650 2238380 command_runner.go:130] > {
	I0911 11:29:23.504681 2238380 command_runner.go:130] >   "images": [
	I0911 11:29:23.504687 2238380 command_runner.go:130] >     {
	I0911 11:29:23.504698 2238380 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0911 11:29:23.504704 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:23.504713 2238380 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0911 11:29:23.504718 2238380 command_runner.go:130] >       ],
	I0911 11:29:23.504724 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:23.504735 2238380 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0911 11:29:23.504750 2238380 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0911 11:29:23.504759 2238380 command_runner.go:130] >       ],
	I0911 11:29:23.504767 2238380 command_runner.go:130] >       "size": "750414",
	I0911 11:29:23.504776 2238380 command_runner.go:130] >       "uid": {
	I0911 11:29:23.504786 2238380 command_runner.go:130] >         "value": "65535"
	I0911 11:29:23.504844 2238380 command_runner.go:130] >       },
	I0911 11:29:23.504866 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:23.504880 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:23.504888 2238380 command_runner.go:130] >     }
	I0911 11:29:23.504897 2238380 command_runner.go:130] >   ]
	I0911 11:29:23.504904 2238380 command_runner.go:130] > }
	I0911 11:29:23.505921 2238380 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 11:29:23.506010 2238380 ssh_runner.go:195] Run: which lz4
	I0911 11:29:23.510177 2238380 command_runner.go:130] > /usr/bin/lz4
	I0911 11:29:23.510207 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0911 11:29:23.510306 2238380 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 11:29:23.514422 2238380 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 11:29:23.514570 2238380 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 11:29:23.514608 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 11:29:25.359807 2238380 crio.go:444] Took 1.849531 seconds to copy over tarball
	I0911 11:29:25.359905 2238380 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 11:29:28.283378 2238380 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (2.923435041s)
	I0911 11:29:28.283421 2238380 crio.go:451] Took 2.923580 seconds to extract the tarball
	I0911 11:29:28.283436 2238380 ssh_runner.go:146] rm: /preloaded.tar.lz4
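As context for the preload step above, a minimal sketch of the equivalent commands on the node once the tarball has been copied to /preloaded.tar.lz4 (the paths and the lz4 usage are taken from the log; this is illustrative, not the harness code):

	# Unpack the preloaded image tarball into cri-o's storage under /var,
	# remove the tarball, then list the images cri-o now reports.
	sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	sudo rm /preloaded.tar.lz4
	sudo crictl images --output json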
	I0911 11:29:28.324566 2238380 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:29:28.365553 2238380 command_runner.go:130] > {
	I0911 11:29:28.365580 2238380 command_runner.go:130] >   "images": [
	I0911 11:29:28.365585 2238380 command_runner.go:130] >     {
	I0911 11:29:28.365593 2238380 command_runner.go:130] >       "id": "b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da",
	I0911 11:29:28.365597 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.365603 2238380 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230511-dc714da8"
	I0911 11:29:28.365607 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365611 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.365619 2238380 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974",
	I0911 11:29:28.365626 2238380 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"
	I0911 11:29:28.365629 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365634 2238380 command_runner.go:130] >       "size": "65249302",
	I0911 11:29:28.365645 2238380 command_runner.go:130] >       "uid": null,
	I0911 11:29:28.365649 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.365658 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.365662 2238380 command_runner.go:130] >     },
	I0911 11:29:28.365665 2238380 command_runner.go:130] >     {
	I0911 11:29:28.365671 2238380 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0911 11:29:28.365675 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.365680 2238380 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0911 11:29:28.365684 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365687 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.365694 2238380 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0911 11:29:28.365701 2238380 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0911 11:29:28.365705 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365708 2238380 command_runner.go:130] >       "size": "31470524",
	I0911 11:29:28.365712 2238380 command_runner.go:130] >       "uid": null,
	I0911 11:29:28.365720 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.365724 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.365728 2238380 command_runner.go:130] >     },
	I0911 11:29:28.365731 2238380 command_runner.go:130] >     {
	I0911 11:29:28.365737 2238380 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0911 11:29:28.365741 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.365745 2238380 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0911 11:29:28.365748 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365752 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.365764 2238380 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0911 11:29:28.365779 2238380 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0911 11:29:28.365789 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365798 2238380 command_runner.go:130] >       "size": "53621675",
	I0911 11:29:28.365807 2238380 command_runner.go:130] >       "uid": null,
	I0911 11:29:28.365815 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.365819 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.365825 2238380 command_runner.go:130] >     },
	I0911 11:29:28.365829 2238380 command_runner.go:130] >     {
	I0911 11:29:28.365835 2238380 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0911 11:29:28.365842 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.365847 2238380 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0911 11:29:28.365852 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365858 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.365865 2238380 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0911 11:29:28.365874 2238380 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0911 11:29:28.365878 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365885 2238380 command_runner.go:130] >       "size": "295456551",
	I0911 11:29:28.365889 2238380 command_runner.go:130] >       "uid": {
	I0911 11:29:28.365895 2238380 command_runner.go:130] >         "value": "0"
	I0911 11:29:28.365908 2238380 command_runner.go:130] >       },
	I0911 11:29:28.365914 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.365918 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.365924 2238380 command_runner.go:130] >     },
	I0911 11:29:28.365928 2238380 command_runner.go:130] >     {
	I0911 11:29:28.365937 2238380 command_runner.go:130] >       "id": "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77",
	I0911 11:29:28.365942 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.365949 2238380 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.1"
	I0911 11:29:28.365953 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365959 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.365966 2238380 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774",
	I0911 11:29:28.365976 2238380 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"
	I0911 11:29:28.365981 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.365985 2238380 command_runner.go:130] >       "size": "126972880",
	I0911 11:29:28.365991 2238380 command_runner.go:130] >       "uid": {
	I0911 11:29:28.365995 2238380 command_runner.go:130] >         "value": "0"
	I0911 11:29:28.366001 2238380 command_runner.go:130] >       },
	I0911 11:29:28.366005 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.366011 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.366015 2238380 command_runner.go:130] >     },
	I0911 11:29:28.366021 2238380 command_runner.go:130] >     {
	I0911 11:29:28.366027 2238380 command_runner.go:130] >       "id": "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac",
	I0911 11:29:28.366033 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.366038 2238380 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.1"
	I0911 11:29:28.366042 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366046 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.366053 2238380 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830",
	I0911 11:29:28.366063 2238380 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"
	I0911 11:29:28.366067 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366075 2238380 command_runner.go:130] >       "size": "123163446",
	I0911 11:29:28.366079 2238380 command_runner.go:130] >       "uid": {
	I0911 11:29:28.366085 2238380 command_runner.go:130] >         "value": "0"
	I0911 11:29:28.366089 2238380 command_runner.go:130] >       },
	I0911 11:29:28.366093 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.366097 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.366100 2238380 command_runner.go:130] >     },
	I0911 11:29:28.366105 2238380 command_runner.go:130] >     {
	I0911 11:29:28.366111 2238380 command_runner.go:130] >       "id": "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5",
	I0911 11:29:28.366117 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.366122 2238380 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.1"
	I0911 11:29:28.366128 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366132 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.366139 2238380 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3",
	I0911 11:29:28.366148 2238380 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"
	I0911 11:29:28.366152 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366156 2238380 command_runner.go:130] >       "size": "74680215",
	I0911 11:29:28.366162 2238380 command_runner.go:130] >       "uid": null,
	I0911 11:29:28.366166 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.366170 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.366173 2238380 command_runner.go:130] >     },
	I0911 11:29:28.366177 2238380 command_runner.go:130] >     {
	I0911 11:29:28.366183 2238380 command_runner.go:130] >       "id": "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a",
	I0911 11:29:28.366189 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.366194 2238380 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.1"
	I0911 11:29:28.366200 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366204 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.366213 2238380 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4",
	I0911 11:29:28.366257 2238380 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"
	I0911 11:29:28.366267 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366274 2238380 command_runner.go:130] >       "size": "61477686",
	I0911 11:29:28.366279 2238380 command_runner.go:130] >       "uid": {
	I0911 11:29:28.366285 2238380 command_runner.go:130] >         "value": "0"
	I0911 11:29:28.366294 2238380 command_runner.go:130] >       },
	I0911 11:29:28.366299 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.366307 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.366311 2238380 command_runner.go:130] >     },
	I0911 11:29:28.366317 2238380 command_runner.go:130] >     {
	I0911 11:29:28.366323 2238380 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0911 11:29:28.366329 2238380 command_runner.go:130] >       "repoTags": [
	I0911 11:29:28.366334 2238380 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0911 11:29:28.366340 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366347 2238380 command_runner.go:130] >       "repoDigests": [
	I0911 11:29:28.366358 2238380 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0911 11:29:28.366370 2238380 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0911 11:29:28.366377 2238380 command_runner.go:130] >       ],
	I0911 11:29:28.366387 2238380 command_runner.go:130] >       "size": "750414",
	I0911 11:29:28.366393 2238380 command_runner.go:130] >       "uid": {
	I0911 11:29:28.366401 2238380 command_runner.go:130] >         "value": "65535"
	I0911 11:29:28.366407 2238380 command_runner.go:130] >       },
	I0911 11:29:28.366417 2238380 command_runner.go:130] >       "username": "",
	I0911 11:29:28.366426 2238380 command_runner.go:130] >       "spec": null
	I0911 11:29:28.366435 2238380 command_runner.go:130] >     }
	I0911 11:29:28.366441 2238380 command_runner.go:130] >   ]
	I0911 11:29:28.366449 2238380 command_runner.go:130] > }
	I0911 11:29:28.367124 2238380 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:29:28.367151 2238380 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:29:28.367235 2238380 ssh_runner.go:195] Run: crio config
	I0911 11:29:28.423744 2238380 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0911 11:29:28.423774 2238380 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0911 11:29:28.423781 2238380 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0911 11:29:28.423784 2238380 command_runner.go:130] > #
	I0911 11:29:28.423791 2238380 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0911 11:29:28.423798 2238380 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0911 11:29:28.423803 2238380 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0911 11:29:28.423811 2238380 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0911 11:29:28.423814 2238380 command_runner.go:130] > # reload'.
	I0911 11:29:28.423867 2238380 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0911 11:29:28.423906 2238380 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0911 11:29:28.423917 2238380 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0911 11:29:28.423927 2238380 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0911 11:29:28.423937 2238380 command_runner.go:130] > [crio]
	I0911 11:29:28.423956 2238380 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0911 11:29:28.423964 2238380 command_runner.go:130] > # containers images, in this directory.
	I0911 11:29:28.423972 2238380 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0911 11:29:28.423986 2238380 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0911 11:29:28.423998 2238380 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0911 11:29:28.424007 2238380 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0911 11:29:28.424018 2238380 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0911 11:29:28.424028 2238380 command_runner.go:130] > storage_driver = "overlay"
	I0911 11:29:28.424037 2238380 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0911 11:29:28.424049 2238380 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0911 11:29:28.424056 2238380 command_runner.go:130] > storage_option = [
	I0911 11:29:28.424113 2238380 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0911 11:29:28.424123 2238380 command_runner.go:130] > ]
	I0911 11:29:28.424134 2238380 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0911 11:29:28.424147 2238380 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0911 11:29:28.424155 2238380 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0911 11:29:28.424165 2238380 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0911 11:29:28.424180 2238380 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0911 11:29:28.424191 2238380 command_runner.go:130] > # always happen on a node reboot
	I0911 11:29:28.424200 2238380 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0911 11:29:28.424212 2238380 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0911 11:29:28.424223 2238380 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0911 11:29:28.424238 2238380 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0911 11:29:28.424247 2238380 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0911 11:29:28.424260 2238380 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0911 11:29:28.424277 2238380 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0911 11:29:28.424287 2238380 command_runner.go:130] > # internal_wipe = true
	I0911 11:29:28.424296 2238380 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0911 11:29:28.424310 2238380 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0911 11:29:28.424324 2238380 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0911 11:29:28.424336 2238380 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0911 11:29:28.424349 2238380 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0911 11:29:28.424358 2238380 command_runner.go:130] > [crio.api]
	I0911 11:29:28.424368 2238380 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0911 11:29:28.424380 2238380 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0911 11:29:28.424394 2238380 command_runner.go:130] > # IP address on which the stream server will listen.
	I0911 11:29:28.424404 2238380 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0911 11:29:28.424418 2238380 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0911 11:29:28.424429 2238380 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0911 11:29:28.424439 2238380 command_runner.go:130] > # stream_port = "0"
	I0911 11:29:28.424448 2238380 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0911 11:29:28.424459 2238380 command_runner.go:130] > # stream_enable_tls = false
	I0911 11:29:28.424472 2238380 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0911 11:29:28.424483 2238380 command_runner.go:130] > # stream_idle_timeout = ""
	I0911 11:29:28.424497 2238380 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0911 11:29:28.424509 2238380 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0911 11:29:28.424518 2238380 command_runner.go:130] > # minutes.
	I0911 11:29:28.424524 2238380 command_runner.go:130] > # stream_tls_cert = ""
	I0911 11:29:28.424538 2238380 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0911 11:29:28.424553 2238380 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0911 11:29:28.424563 2238380 command_runner.go:130] > # stream_tls_key = ""
	I0911 11:29:28.424574 2238380 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0911 11:29:28.424587 2238380 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0911 11:29:28.424597 2238380 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0911 11:29:28.424602 2238380 command_runner.go:130] > # stream_tls_ca = ""
	I0911 11:29:28.424612 2238380 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:29:28.424616 2238380 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0911 11:29:28.424626 2238380 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:29:28.424633 2238380 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0911 11:29:28.424665 2238380 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0911 11:29:28.424687 2238380 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0911 11:29:28.424693 2238380 command_runner.go:130] > [crio.runtime]
	I0911 11:29:28.424712 2238380 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0911 11:29:28.424738 2238380 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0911 11:29:28.424749 2238380 command_runner.go:130] > # "nofile=1024:2048"
	I0911 11:29:28.424760 2238380 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0911 11:29:28.424770 2238380 command_runner.go:130] > # default_ulimits = [
	I0911 11:29:28.424778 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.424790 2238380 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0911 11:29:28.424797 2238380 command_runner.go:130] > # no_pivot = false
	I0911 11:29:28.424803 2238380 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0911 11:29:28.424828 2238380 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0911 11:29:28.424841 2238380 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0911 11:29:28.424853 2238380 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0911 11:29:28.424864 2238380 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0911 11:29:28.424880 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:29:28.424891 2238380 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0911 11:29:28.424902 2238380 command_runner.go:130] > # Cgroup setting for conmon
	I0911 11:29:28.424915 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0911 11:29:28.424922 2238380 command_runner.go:130] > conmon_cgroup = "pod"
	I0911 11:29:28.424932 2238380 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0911 11:29:28.424944 2238380 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0911 11:29:28.424957 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:29:28.424967 2238380 command_runner.go:130] > conmon_env = [
	I0911 11:29:28.424977 2238380 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0911 11:29:28.424987 2238380 command_runner.go:130] > ]
	I0911 11:29:28.424996 2238380 command_runner.go:130] > # Additional environment variables to set for all the
	I0911 11:29:28.425008 2238380 command_runner.go:130] > # containers. These are overridden if set in the
	I0911 11:29:28.425021 2238380 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0911 11:29:28.425032 2238380 command_runner.go:130] > # default_env = [
	I0911 11:29:28.425037 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.425049 2238380 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0911 11:29:28.425059 2238380 command_runner.go:130] > # selinux = false
	I0911 11:29:28.425072 2238380 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0911 11:29:28.425086 2238380 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0911 11:29:28.425099 2238380 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0911 11:29:28.425110 2238380 command_runner.go:130] > # seccomp_profile = ""
	I0911 11:29:28.425123 2238380 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0911 11:29:28.425137 2238380 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0911 11:29:28.425147 2238380 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0911 11:29:28.425157 2238380 command_runner.go:130] > # which might increase security.
	I0911 11:29:28.425169 2238380 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0911 11:29:28.425183 2238380 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0911 11:29:28.425199 2238380 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0911 11:29:28.425209 2238380 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0911 11:29:28.425220 2238380 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0911 11:29:28.425232 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:29:28.425240 2238380 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0911 11:29:28.425252 2238380 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0911 11:29:28.425263 2238380 command_runner.go:130] > # the cgroup blockio controller.
	I0911 11:29:28.425272 2238380 command_runner.go:130] > # blockio_config_file = ""
	I0911 11:29:28.425287 2238380 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0911 11:29:28.425297 2238380 command_runner.go:130] > # irqbalance daemon.
	I0911 11:29:28.425307 2238380 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0911 11:29:28.425322 2238380 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0911 11:29:28.425334 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:29:28.425344 2238380 command_runner.go:130] > # rdt_config_file = ""
	I0911 11:29:28.425354 2238380 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0911 11:29:28.425364 2238380 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0911 11:29:28.425376 2238380 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0911 11:29:28.425387 2238380 command_runner.go:130] > # separate_pull_cgroup = ""
	I0911 11:29:28.425401 2238380 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0911 11:29:28.425412 2238380 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0911 11:29:28.425422 2238380 command_runner.go:130] > # will be added.
	I0911 11:29:28.425429 2238380 command_runner.go:130] > # default_capabilities = [
	I0911 11:29:28.425439 2238380 command_runner.go:130] > # 	"CHOWN",
	I0911 11:29:28.425446 2238380 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0911 11:29:28.425455 2238380 command_runner.go:130] > # 	"FSETID",
	I0911 11:29:28.425461 2238380 command_runner.go:130] > # 	"FOWNER",
	I0911 11:29:28.425469 2238380 command_runner.go:130] > # 	"SETGID",
	I0911 11:29:28.425473 2238380 command_runner.go:130] > # 	"SETUID",
	I0911 11:29:28.425483 2238380 command_runner.go:130] > # 	"SETPCAP",
	I0911 11:29:28.425490 2238380 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0911 11:29:28.425500 2238380 command_runner.go:130] > # 	"KILL",
	I0911 11:29:28.425506 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.425521 2238380 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0911 11:29:28.425534 2238380 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:29:28.425544 2238380 command_runner.go:130] > # default_sysctls = [
	I0911 11:29:28.425585 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.425598 2238380 command_runner.go:130] > # List of devices on the host that a
	I0911 11:29:28.425610 2238380 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0911 11:29:28.425621 2238380 command_runner.go:130] > # allowed_devices = [
	I0911 11:29:28.425629 2238380 command_runner.go:130] > # 	"/dev/fuse",
	I0911 11:29:28.425640 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.425649 2238380 command_runner.go:130] > # List of additional devices. specified as
	I0911 11:29:28.425661 2238380 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0911 11:29:28.425671 2238380 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0911 11:29:28.425698 2238380 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:29:28.425709 2238380 command_runner.go:130] > # additional_devices = [
	I0911 11:29:28.425715 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.425730 2238380 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0911 11:29:28.425739 2238380 command_runner.go:130] > # cdi_spec_dirs = [
	I0911 11:29:28.425745 2238380 command_runner.go:130] > # 	"/etc/cdi",
	I0911 11:29:28.425750 2238380 command_runner.go:130] > # 	"/var/run/cdi",
	I0911 11:29:28.425754 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.425767 2238380 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0911 11:29:28.425784 2238380 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0911 11:29:28.425795 2238380 command_runner.go:130] > # Defaults to false.
	I0911 11:29:28.425804 2238380 command_runner.go:130] > # device_ownership_from_security_context = false
	I0911 11:29:28.425817 2238380 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0911 11:29:28.425833 2238380 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0911 11:29:28.425843 2238380 command_runner.go:130] > # hooks_dir = [
	I0911 11:29:28.425848 2238380 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0911 11:29:28.425853 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.425863 2238380 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0911 11:29:28.425878 2238380 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0911 11:29:28.425891 2238380 command_runner.go:130] > # its default mounts from the following two files:
	I0911 11:29:28.425899 2238380 command_runner.go:130] > #
	I0911 11:29:28.425910 2238380 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0911 11:29:28.425924 2238380 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0911 11:29:28.425936 2238380 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0911 11:29:28.425943 2238380 command_runner.go:130] > #
	I0911 11:29:28.425949 2238380 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0911 11:29:28.425963 2238380 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0911 11:29:28.425978 2238380 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0911 11:29:28.425999 2238380 command_runner.go:130] > #      only add mounts it finds in this file.
	I0911 11:29:28.426008 2238380 command_runner.go:130] > #
	I0911 11:29:28.426016 2238380 command_runner.go:130] > # default_mounts_file = ""
	I0911 11:29:28.426028 2238380 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0911 11:29:28.426043 2238380 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0911 11:29:28.426050 2238380 command_runner.go:130] > pids_limit = 1024
	I0911 11:29:28.426058 2238380 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0911 11:29:28.426071 2238380 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0911 11:29:28.426086 2238380 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0911 11:29:28.426100 2238380 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0911 11:29:28.426110 2238380 command_runner.go:130] > # log_size_max = -1
	I0911 11:29:28.426122 2238380 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0911 11:29:28.426133 2238380 command_runner.go:130] > # log_to_journald = false
	I0911 11:29:28.426143 2238380 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0911 11:29:28.426153 2238380 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0911 11:29:28.426158 2238380 command_runner.go:130] > # Path to directory for container attach sockets.
	I0911 11:29:28.426169 2238380 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0911 11:29:28.426182 2238380 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0911 11:29:28.426196 2238380 command_runner.go:130] > # bind_mount_prefix = ""
	I0911 11:29:28.426209 2238380 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0911 11:29:28.426218 2238380 command_runner.go:130] > # read_only = false
	I0911 11:29:28.426229 2238380 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0911 11:29:28.426243 2238380 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0911 11:29:28.426252 2238380 command_runner.go:130] > # live configuration reload.
	I0911 11:29:28.426256 2238380 command_runner.go:130] > # log_level = "info"
	I0911 11:29:28.426268 2238380 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0911 11:29:28.426280 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:29:28.426287 2238380 command_runner.go:130] > # log_filter = ""
	I0911 11:29:28.426301 2238380 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0911 11:29:28.426315 2238380 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0911 11:29:28.426325 2238380 command_runner.go:130] > # separated by comma.
	I0911 11:29:28.426332 2238380 command_runner.go:130] > # uid_mappings = ""
	I0911 11:29:28.426344 2238380 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0911 11:29:28.426353 2238380 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0911 11:29:28.426359 2238380 command_runner.go:130] > # separated by comma.
	I0911 11:29:28.426369 2238380 command_runner.go:130] > # gid_mappings = ""
	I0911 11:29:28.426383 2238380 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0911 11:29:28.426394 2238380 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:29:28.426412 2238380 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:29:28.426422 2238380 command_runner.go:130] > # minimum_mappable_uid = -1
	I0911 11:29:28.426437 2238380 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0911 11:29:28.426449 2238380 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:29:28.426458 2238380 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:29:28.426465 2238380 command_runner.go:130] > # minimum_mappable_gid = -1
	I0911 11:29:28.426479 2238380 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0911 11:29:28.426516 2238380 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0911 11:29:28.426529 2238380 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0911 11:29:28.426539 2238380 command_runner.go:130] > # ctr_stop_timeout = 30
	I0911 11:29:28.426545 2238380 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0911 11:29:28.426557 2238380 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0911 11:29:28.426569 2238380 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0911 11:29:28.426581 2238380 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0911 11:29:28.426590 2238380 command_runner.go:130] > drop_infra_ctr = false
	I0911 11:29:28.426604 2238380 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0911 11:29:28.426616 2238380 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0911 11:29:28.426630 2238380 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0911 11:29:28.426637 2238380 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0911 11:29:28.426647 2238380 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0911 11:29:28.426658 2238380 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0911 11:29:28.426667 2238380 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0911 11:29:28.426683 2238380 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0911 11:29:28.426693 2238380 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0911 11:29:28.426709 2238380 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0911 11:29:28.426723 2238380 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0911 11:29:28.426737 2238380 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0911 11:29:28.426745 2238380 command_runner.go:130] > # default_runtime = "runc"
	I0911 11:29:28.426758 2238380 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0911 11:29:28.426771 2238380 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0911 11:29:28.426789 2238380 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0911 11:29:28.426800 2238380 command_runner.go:130] > # creation as a file is not desired either.
	I0911 11:29:28.426817 2238380 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0911 11:29:28.426826 2238380 command_runner.go:130] > # the hostname is being managed dynamically.
	I0911 11:29:28.426836 2238380 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0911 11:29:28.426844 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.426856 2238380 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0911 11:29:28.426870 2238380 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0911 11:29:28.426885 2238380 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0911 11:29:28.426899 2238380 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0911 11:29:28.426907 2238380 command_runner.go:130] > #
	I0911 11:29:28.426913 2238380 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0911 11:29:28.426922 2238380 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0911 11:29:28.426929 2238380 command_runner.go:130] > #  runtime_type = "oci"
	I0911 11:29:28.426941 2238380 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0911 11:29:28.426949 2238380 command_runner.go:130] > #  privileged_without_host_devices = false
	I0911 11:29:28.426959 2238380 command_runner.go:130] > #  allowed_annotations = []
	I0911 11:29:28.426969 2238380 command_runner.go:130] > # Where:
	I0911 11:29:28.426978 2238380 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0911 11:29:28.426992 2238380 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0911 11:29:28.427005 2238380 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0911 11:29:28.427014 2238380 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0911 11:29:28.427020 2238380 command_runner.go:130] > #   in $PATH.
	I0911 11:29:28.427034 2238380 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0911 11:29:28.427045 2238380 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0911 11:29:28.427056 2238380 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0911 11:29:28.427066 2238380 command_runner.go:130] > #   state.
	I0911 11:29:28.427077 2238380 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0911 11:29:28.427090 2238380 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0911 11:29:28.427104 2238380 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0911 11:29:28.427114 2238380 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0911 11:29:28.427120 2238380 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0911 11:29:28.427133 2238380 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0911 11:29:28.427142 2238380 command_runner.go:130] > #   The currently recognized values are:
	I0911 11:29:28.427153 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0911 11:29:28.427169 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0911 11:29:28.427182 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0911 11:29:28.427195 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0911 11:29:28.427209 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0911 11:29:28.427218 2238380 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0911 11:29:28.427234 2238380 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0911 11:29:28.427250 2238380 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0911 11:29:28.427262 2238380 command_runner.go:130] > #   should be moved to the container's cgroup
	I0911 11:29:28.427291 2238380 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0911 11:29:28.427303 2238380 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0911 11:29:28.427310 2238380 command_runner.go:130] > runtime_type = "oci"
	I0911 11:29:28.427318 2238380 command_runner.go:130] > runtime_root = "/run/runc"
	I0911 11:29:28.427322 2238380 command_runner.go:130] > runtime_config_path = ""
	I0911 11:29:28.427331 2238380 command_runner.go:130] > monitor_path = ""
	I0911 11:29:28.427341 2238380 command_runner.go:130] > monitor_cgroup = ""
	I0911 11:29:28.427349 2238380 command_runner.go:130] > monitor_exec_cgroup = ""
	I0911 11:29:28.427363 2238380 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0911 11:29:28.427373 2238380 command_runner.go:130] > # running containers
	I0911 11:29:28.427380 2238380 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0911 11:29:28.427393 2238380 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0911 11:29:28.427429 2238380 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0911 11:29:28.427443 2238380 command_runner.go:130] > # surface and mitigating the consequences of container breakout.
	I0911 11:29:28.427452 2238380 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0911 11:29:28.427464 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0911 11:29:28.427474 2238380 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0911 11:29:28.427485 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0911 11:29:28.427496 2238380 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0911 11:29:28.427506 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0911 11:29:28.427516 2238380 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0911 11:29:28.427525 2238380 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0911 11:29:28.427539 2238380 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0911 11:29:28.427552 2238380 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0911 11:29:28.427568 2238380 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0911 11:29:28.427585 2238380 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0911 11:29:28.427603 2238380 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0911 11:29:28.427614 2238380 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0911 11:29:28.427627 2238380 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0911 11:29:28.427643 2238380 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0911 11:29:28.427653 2238380 command_runner.go:130] > # Example:
	I0911 11:29:28.427664 2238380 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0911 11:29:28.427675 2238380 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0911 11:29:28.427686 2238380 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0911 11:29:28.427699 2238380 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0911 11:29:28.427707 2238380 command_runner.go:130] > # cpuset = 0
	I0911 11:29:28.427711 2238380 command_runner.go:130] > # cpushares = "0-1"
	I0911 11:29:28.427722 2238380 command_runner.go:130] > # Where:
	I0911 11:29:28.427737 2238380 command_runner.go:130] > # The workload name is workload-type.
	I0911 11:29:28.427750 2238380 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0911 11:29:28.427763 2238380 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0911 11:29:28.427776 2238380 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0911 11:29:28.427793 2238380 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0911 11:29:28.427804 2238380 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0911 11:29:28.427810 2238380 command_runner.go:130] > # 
	I0911 11:29:28.427819 2238380 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0911 11:29:28.427828 2238380 command_runner.go:130] > #
	I0911 11:29:28.427838 2238380 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0911 11:29:28.427852 2238380 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0911 11:29:28.427865 2238380 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0911 11:29:28.427879 2238380 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0911 11:29:28.427892 2238380 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0911 11:29:28.427900 2238380 command_runner.go:130] > [crio.image]
	I0911 11:29:28.427906 2238380 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0911 11:29:28.427916 2238380 command_runner.go:130] > # default_transport = "docker://"
	I0911 11:29:28.427931 2238380 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0911 11:29:28.427945 2238380 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:29:28.427955 2238380 command_runner.go:130] > # global_auth_file = ""
	I0911 11:29:28.427966 2238380 command_runner.go:130] > # The image used to instantiate infra containers.
	I0911 11:29:28.427978 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:29:28.427989 2238380 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0911 11:29:28.428000 2238380 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0911 11:29:28.428011 2238380 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:29:28.428025 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:29:28.428036 2238380 command_runner.go:130] > # pause_image_auth_file = ""
	I0911 11:29:28.428047 2238380 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0911 11:29:28.428061 2238380 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0911 11:29:28.428074 2238380 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0911 11:29:28.428086 2238380 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0911 11:29:28.428093 2238380 command_runner.go:130] > # pause_command = "/pause"
	I0911 11:29:28.428103 2238380 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0911 11:29:28.428121 2238380 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0911 11:29:28.428155 2238380 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0911 11:29:28.428170 2238380 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0911 11:29:28.428181 2238380 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0911 11:29:28.428188 2238380 command_runner.go:130] > # signature_policy = ""
	I0911 11:29:28.428198 2238380 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0911 11:29:28.428212 2238380 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0911 11:29:28.428222 2238380 command_runner.go:130] > # changing them here.
	I0911 11:29:28.428231 2238380 command_runner.go:130] > # insecure_registries = [
	I0911 11:29:28.428240 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.428251 2238380 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0911 11:29:28.428263 2238380 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0911 11:29:28.428270 2238380 command_runner.go:130] > # image_volumes = "mkdir"
	I0911 11:29:28.428275 2238380 command_runner.go:130] > # Temporary directory to use for storing big files
	I0911 11:29:28.428279 2238380 command_runner.go:130] > # big_files_temporary_dir = ""
	I0911 11:29:28.428289 2238380 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0911 11:29:28.428296 2238380 command_runner.go:130] > # CNI plugins.
	I0911 11:29:28.428302 2238380 command_runner.go:130] > [crio.network]
	I0911 11:29:28.428312 2238380 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0911 11:29:28.428321 2238380 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0911 11:29:28.428328 2238380 command_runner.go:130] > # cni_default_network = ""
	I0911 11:29:28.428337 2238380 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0911 11:29:28.428345 2238380 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0911 11:29:28.428419 2238380 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0911 11:29:28.428449 2238380 command_runner.go:130] > # plugin_dirs = [
	I0911 11:29:28.428456 2238380 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0911 11:29:28.428462 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.428471 2238380 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0911 11:29:28.428477 2238380 command_runner.go:130] > [crio.metrics]
	I0911 11:29:28.428485 2238380 command_runner.go:130] > # Globally enable or disable metrics support.
	I0911 11:29:28.428492 2238380 command_runner.go:130] > enable_metrics = true
	I0911 11:29:28.428500 2238380 command_runner.go:130] > # Specify enabled metrics collectors.
	I0911 11:29:28.428508 2238380 command_runner.go:130] > # Per default all metrics are enabled.
	I0911 11:29:28.428519 2238380 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0911 11:29:28.428529 2238380 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0911 11:29:28.428545 2238380 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0911 11:29:28.428554 2238380 command_runner.go:130] > # metrics_collectors = [
	I0911 11:29:28.428561 2238380 command_runner.go:130] > # 	"operations",
	I0911 11:29:28.428570 2238380 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0911 11:29:28.428581 2238380 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0911 11:29:28.428605 2238380 command_runner.go:130] > # 	"operations_errors",
	I0911 11:29:28.428616 2238380 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0911 11:29:28.428625 2238380 command_runner.go:130] > # 	"image_pulls_by_name",
	I0911 11:29:28.428636 2238380 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0911 11:29:28.428644 2238380 command_runner.go:130] > # 	"image_pulls_failures",
	I0911 11:29:28.428648 2238380 command_runner.go:130] > # 	"image_pulls_successes",
	I0911 11:29:28.428658 2238380 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0911 11:29:28.428665 2238380 command_runner.go:130] > # 	"image_layer_reuse",
	I0911 11:29:28.428676 2238380 command_runner.go:130] > # 	"containers_oom_total",
	I0911 11:29:28.428683 2238380 command_runner.go:130] > # 	"containers_oom",
	I0911 11:29:28.428693 2238380 command_runner.go:130] > # 	"processes_defunct",
	I0911 11:29:28.428701 2238380 command_runner.go:130] > # 	"operations_total",
	I0911 11:29:28.428711 2238380 command_runner.go:130] > # 	"operations_latency_seconds",
	I0911 11:29:28.428719 2238380 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0911 11:29:28.428729 2238380 command_runner.go:130] > # 	"operations_errors_total",
	I0911 11:29:28.428736 2238380 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0911 11:29:28.428746 2238380 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0911 11:29:28.428751 2238380 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0911 11:29:28.428757 2238380 command_runner.go:130] > # 	"image_pulls_success_total",
	I0911 11:29:28.428767 2238380 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0911 11:29:28.428775 2238380 command_runner.go:130] > # 	"containers_oom_count_total",
	I0911 11:29:28.428784 2238380 command_runner.go:130] > # ]
	I0911 11:29:28.428794 2238380 command_runner.go:130] > # The port on which the metrics server will listen.
	I0911 11:29:28.428804 2238380 command_runner.go:130] > # metrics_port = 9090
	I0911 11:29:28.428827 2238380 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0911 11:29:28.428836 2238380 command_runner.go:130] > # metrics_socket = ""
	I0911 11:29:28.428845 2238380 command_runner.go:130] > # The certificate for the secure metrics server.
	I0911 11:29:28.428860 2238380 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0911 11:29:28.428874 2238380 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0911 11:29:28.428885 2238380 command_runner.go:130] > # certificate on any modification event.
	I0911 11:29:28.428893 2238380 command_runner.go:130] > # metrics_cert = ""
	I0911 11:29:28.428906 2238380 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0911 11:29:28.428918 2238380 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0911 11:29:28.428927 2238380 command_runner.go:130] > # metrics_key = ""
	I0911 11:29:28.428938 2238380 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0911 11:29:28.428947 2238380 command_runner.go:130] > [crio.tracing]
	I0911 11:29:28.428957 2238380 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0911 11:29:28.428963 2238380 command_runner.go:130] > # enable_tracing = false
	I0911 11:29:28.428969 2238380 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0911 11:29:28.428975 2238380 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0911 11:29:28.428981 2238380 command_runner.go:130] > # Number of samples to collect per million spans.
	I0911 11:29:28.428988 2238380 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0911 11:29:28.428994 2238380 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0911 11:29:28.428999 2238380 command_runner.go:130] > [crio.stats]
	I0911 11:29:28.429005 2238380 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0911 11:29:28.429014 2238380 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0911 11:29:28.429019 2238380 command_runner.go:130] > # stats_collection_period = 0
	I0911 11:29:28.429073 2238380 command_runner.go:130] ! time="2023-09-11 11:29:28.366527896Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0911 11:29:28.429096 2238380 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
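The dump above is the CRI-O configuration minikube lays down before restarting the runtime; a few keys differ from the upstream defaults (pids_limit = 1024, drop_infra_ctr = false, pinns_path = "/usr/bin/pinns", pause_image = "registry.k8s.io/pause:3.9", enable_metrics = true). A minimal sketch, assuming the usual /etc/crio/crio.conf location (the log above never prints the file path), for confirming those values on the node by hand:

  # Sketch only: inspect the effective CRI-O configuration on the node.
  # Assumes the config lives at /etc/crio/crio.conf plus /etc/crio/crio.conf.d/ drop-ins.
  sudo crio config 2>/dev/null | grep -E '^(pids_limit|drop_infra_ctr|pinns_path|pause_image|enable_metrics)'
  # Or grep the files directly if the crio binary is not on PATH:
  sudo grep -R -E '^(pids_limit|pause_image)' /etc/crio/crio.conf /etc/crio/crio.conf.d/ 2>/dev/null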
	I0911 11:29:28.429185 2238380 cni.go:84] Creating CNI manager for ""
	I0911 11:29:28.429200 2238380 cni.go:136] 3 nodes found, recommending kindnet
	I0911 11:29:28.429221 2238380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:29:28.429243 2238380 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.237 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-378707 NodeName:multinode-378707 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.237 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:
/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:29:28.429408 2238380 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.237
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-378707"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.237
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
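The generated kubeadm config above is what gets copied to /var/tmp/minikube/kubeadm.yaml.new a few lines below. A hedged sketch for sanity-checking it by hand with the same kubeadm binary found on the node (kubeadm gained a "config validate" subcommand in v1.26, so it should apply to the v1.28.1 binaries used here; this command is not part of the logged run):

  # Sketch only: validate the rendered kubeadm config before it is applied.
  sudo /var/lib/minikube/binaries/v1.28.1/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new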
	I0911 11:29:28.429493 2238380 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-378707 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.237
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
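The [Service] drop-in above replaces ExecStart for the kubelet unit and is written to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf just below. A minimal sketch, not part of the logged run, for verifying that the override is the one systemd will actually use:

  # Sketch only: reload systemd and confirm the merged kubelet unit carries the new ExecStart.
  sudo systemctl daemon-reload
  systemctl cat kubelet | grep -A2 '^ExecStart='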
	I0911 11:29:28.429564 2238380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:29:28.439251 2238380 command_runner.go:130] > kubeadm
	I0911 11:29:28.439277 2238380 command_runner.go:130] > kubectl
	I0911 11:29:28.439282 2238380 command_runner.go:130] > kubelet
	I0911 11:29:28.439354 2238380 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:29:28.439429 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:29:28.449127 2238380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (376 bytes)
	I0911 11:29:28.465579 2238380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:29:28.482667 2238380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2103 bytes)
	I0911 11:29:28.500289 2238380 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0911 11:29:28.504486 2238380 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.237	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:29:28.516576 2238380 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707 for IP: 192.168.39.237
	I0911 11:29:28.516623 2238380 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:29:28.516852 2238380 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:29:28.516950 2238380 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:29:28.517115 2238380 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key
	I0911 11:29:28.517185 2238380 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key.cf509944
	I0911 11:29:28.517221 2238380 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key
	I0911 11:29:28.517232 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0911 11:29:28.517245 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0911 11:29:28.517258 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0911 11:29:28.517275 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0911 11:29:28.517297 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:29:28.517311 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:29:28.517321 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:29:28.517330 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:29:28.517382 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:29:28.517411 2238380 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:29:28.517421 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:29:28.517444 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:29:28.517471 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:29:28.517503 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:29:28.517565 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:29:28.517609 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /usr/share/ca-certificates/22224712.pem
	I0911 11:29:28.517629 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:29:28.517646 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem -> /usr/share/ca-certificates/2222471.pem
	I0911 11:29:28.518440 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:29:28.542535 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 11:29:28.567303 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:29:28.592742 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:29:28.618599 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:29:28.643163 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:29:28.669669 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:29:28.694726 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:29:28.721760 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:29:28.747387 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:29:28.774047 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:29:28.800158 2238380 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:29:28.817991 2238380 ssh_runner.go:195] Run: openssl version
	I0911 11:29:28.824388 2238380 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0911 11:29:28.824493 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:29:28.835966 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:29:28.841251 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:29:28.841402 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:29:28.841472 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:29:28.847825 2238380 command_runner.go:130] > 3ec20f2e
	I0911 11:29:28.847929 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:29:28.860043 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:29:28.871851 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:29:28.877007 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:29:28.877173 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:29:28.877345 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:29:28.883163 2238380 command_runner.go:130] > b5213941
	I0911 11:29:28.883423 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:29:28.895992 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:29:28.907837 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:29:28.912967 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:29:28.913012 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:29:28.913067 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:29:28.919101 2238380 command_runner.go:130] > 51391683
	I0911 11:29:28.919190 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
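Each of the three certificate blocks above follows the same pattern: hash the PEM with openssl, then symlink it into /etc/ssl/certs under <hash>.0 so OpenSSL's lookup-by-hash finds it. A generalized sketch of that pattern (what openssl rehash / c_rehash automates), using the same directories the log uses:

  # Sketch only: hash-and-symlink every PEM in the CA directory.
  for pem in /usr/share/ca-certificates/*.pem; do
    hash=$(openssl x509 -hash -noout -in "$pem")
    sudo ln -fs "$pem" "/etc/ssl/certs/${hash}.0"
  done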
	I0911 11:29:28.931216 2238380 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:29:28.936072 2238380 command_runner.go:130] > ca.crt
	I0911 11:29:28.936099 2238380 command_runner.go:130] > ca.key
	I0911 11:29:28.936107 2238380 command_runner.go:130] > healthcheck-client.crt
	I0911 11:29:28.936112 2238380 command_runner.go:130] > healthcheck-client.key
	I0911 11:29:28.936117 2238380 command_runner.go:130] > peer.crt
	I0911 11:29:28.936121 2238380 command_runner.go:130] > peer.key
	I0911 11:29:28.936124 2238380 command_runner.go:130] > server.crt
	I0911 11:29:28.936128 2238380 command_runner.go:130] > server.key
	I0911 11:29:28.936318 2238380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 11:29:28.942756 2238380 command_runner.go:130] > Certificate will not expire
	I0911 11:29:28.942927 2238380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 11:29:28.948954 2238380 command_runner.go:130] > Certificate will not expire
	I0911 11:29:28.949158 2238380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 11:29:28.955931 2238380 command_runner.go:130] > Certificate will not expire
	I0911 11:29:28.956010 2238380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 11:29:28.962319 2238380 command_runner.go:130] > Certificate will not expire
	I0911 11:29:28.962411 2238380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 11:29:28.968253 2238380 command_runner.go:130] > Certificate will not expire
	I0911 11:29:28.968491 2238380 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 11:29:28.974579 2238380 command_runner.go:130] > Certificate will not expire
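The "Certificate will not expire" lines come from openssl's -checkend 86400, which exits non-zero when a certificate expires within 86400 seconds (24 hours). A short sketch that applies the same check to every certificate under /var/lib/minikube/certs rather than to the handful minikube probes individually:

  # Sketch only: report any certificate expiring within the next 24 hours.
  for crt in $(sudo find /var/lib/minikube/certs -name '*.crt'); do
    sudo openssl x509 -noout -in "$crt" -checkend 86400 >/dev/null || echo "expiring soon: $crt"
  done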
	I0911 11:29:28.974672 2238380 kubeadm.go:404] StartCluster: {Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:fal
se ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClien
tPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:29:28.974821 2238380 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:29:28.974868 2238380 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:29:29.009083 2238380 cri.go:89] found id: ""
	I0911 11:29:29.009179 2238380 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:29:29.020356 2238380 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0911 11:29:29.020381 2238380 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0911 11:29:29.020388 2238380 command_runner.go:130] > /var/lib/minikube/etcd:
	I0911 11:29:29.020391 2238380 command_runner.go:130] > member
	I0911 11:29:29.020446 2238380 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 11:29:29.020467 2238380 kubeadm.go:636] restartCluster start
	I0911 11:29:29.020533 2238380 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 11:29:29.030517 2238380 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:29.031078 2238380 kubeconfig.go:92] found "multinode-378707" server: "https://192.168.39.237:8443"
	I0911 11:29:29.031477 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:29:29.031701 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:29:29.032416 2238380 cert_rotation.go:137] Starting client certificate rotation controller
	I0911 11:29:29.032703 2238380 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 11:29:29.042716 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:29.042809 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:29.055664 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:29.055689 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:29.055737 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:29.067765 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:29.568572 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:29.568680 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:29.582596 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:30.068135 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:30.068239 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:30.081383 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:30.567910 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:30.568032 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:30.581095 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:31.068176 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:31.068330 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:31.081263 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:31.568915 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:31.569000 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:31.581760 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:32.068318 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:32.068420 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:32.081771 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:32.568280 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:32.568385 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:32.581496 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:33.068833 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:33.068939 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:33.083297 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:33.568898 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:33.569006 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:33.583019 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:34.068531 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:34.068661 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:34.083399 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:34.567992 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:34.568104 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:34.580427 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:35.067965 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:35.068075 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:35.080749 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:35.568302 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:35.568408 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:35.581530 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:36.068601 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:36.068698 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:36.081281 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:36.568917 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:36.569011 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:36.582115 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:37.068608 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:37.068728 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:37.081172 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:37.568559 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:37.568678 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:37.581811 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:38.068879 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:38.068982 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:38.081747 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:38.568198 2238380 api_server.go:166] Checking apiserver status ...
	I0911 11:29:38.568300 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:29:38.580985 2238380 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:29:39.043768 2238380 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 11:29:39.043806 2238380 kubeadm.go:1128] stopping kube-system containers ...
	I0911 11:29:39.043822 2238380 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 11:29:39.043881 2238380 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:29:39.080269 2238380 cri.go:89] found id: ""
	I0911 11:29:39.080352 2238380 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 11:29:39.100789 2238380 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:29:39.112446 2238380 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0911 11:29:39.112472 2238380 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0911 11:29:39.112479 2238380 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0911 11:29:39.112488 2238380 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:29:39.112521 2238380 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:29:39.112570 2238380 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:29:39.124532 2238380 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 11:29:39.124576 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:29:39.256555 2238380 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 11:29:39.256584 2238380 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0911 11:29:39.256591 2238380 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0911 11:29:39.256608 2238380 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 11:29:39.256618 2238380 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0911 11:29:39.256628 2238380 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0911 11:29:39.256636 2238380 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0911 11:29:39.256644 2238380 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0911 11:29:39.256658 2238380 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0911 11:29:39.256668 2238380 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 11:29:39.256675 2238380 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 11:29:39.256681 2238380 command_runner.go:130] > [certs] Using the existing "sa" key
	I0911 11:29:39.256727 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:29:39.313787 2238380 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 11:29:39.404173 2238380 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 11:29:39.561328 2238380 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 11:29:39.653632 2238380 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 11:29:39.860512 2238380 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 11:29:39.863929 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:29:39.936598 2238380 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:29:39.937972 2238380 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:29:39.938171 2238380 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0911 11:29:40.100701 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:29:40.178911 2238380 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 11:29:40.178937 2238380 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 11:29:40.181421 2238380 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 11:29:40.184182 2238380 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 11:29:40.187836 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:29:40.262013 2238380 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 11:29:40.273319 2238380 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:29:40.273400 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:29:40.290213 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:29:40.806886 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:29:41.306876 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:29:41.807081 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:29:42.307457 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:29:42.806639 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:29:42.826124 2238380 command_runner.go:130] > 1108
	I0911 11:29:42.826353 2238380 api_server.go:72] duration metric: took 2.553062748s to wait for apiserver process to appear ...
	I0911 11:29:42.826374 2238380 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:29:42.826391 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:29:42.829836 2238380 api_server.go:269] stopped: https://192.168.39.237:8443/healthz: Get "https://192.168.39.237:8443/healthz": dial tcp 192.168.39.237:8443: connect: connection refused
	I0911 11:29:42.829880 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:29:42.830350 2238380 api_server.go:269] stopped: https://192.168.39.237:8443/healthz: Get "https://192.168.39.237:8443/healthz": dial tcp 192.168.39.237:8443: connect: connection refused
	I0911 11:29:43.331099 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:29:47.361906 2238380 api_server.go:279] https://192.168.39.237:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 11:29:47.361945 2238380 api_server.go:103] status: https://192.168.39.237:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 11:29:47.361960 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:29:47.453842 2238380 api_server.go:279] https://192.168.39.237:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 11:29:47.453893 2238380 api_server.go:103] status: https://192.168.39.237:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 11:29:47.830474 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:29:47.840289 2238380 api_server.go:279] https://192.168.39.237:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 11:29:47.840325 2238380 api_server.go:103] status: https://192.168.39.237:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 11:29:48.331075 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:29:48.340683 2238380 api_server.go:279] https://192.168.39.237:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 11:29:48.340711 2238380 api_server.go:103] status: https://192.168.39.237:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 11:29:48.831401 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:29:48.841655 2238380 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0911 11:29:48.841780 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/version
	I0911 11:29:48.841791 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:48.841803 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:48.841824 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:48.853464 2238380 round_trippers.go:574] Response Status: 200 OK in 11 milliseconds
	I0911 11:29:48.853499 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:48.853511 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:48.853520 2238380 round_trippers.go:580]     Content-Length: 263
	I0911 11:29:48.853529 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:48 GMT
	I0911 11:29:48.853538 2238380 round_trippers.go:580]     Audit-Id: 6c411457-2d30-4257-9e60-70e0f9c3c456
	I0911 11:29:48.853547 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:48.853558 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:48.853571 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:48.853611 2238380 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0911 11:29:48.853739 2238380 api_server.go:141] control plane version: v1.28.1
	I0911 11:29:48.853764 2238380 api_server.go:131] duration metric: took 6.027383095s to wait for apiserver health ...
	I0911 11:29:48.853776 2238380 cni.go:84] Creating CNI manager for ""
	I0911 11:29:48.853794 2238380 cni.go:136] 3 nodes found, recommending kindnet
	I0911 11:29:48.855946 2238380 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0911 11:29:48.857832 2238380 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:29:48.868705 2238380 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0911 11:29:48.868741 2238380 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0911 11:29:48.868751 2238380 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0911 11:29:48.868762 2238380 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:29:48.868771 2238380 command_runner.go:130] > Access: 2023-09-11 11:29:14.744855626 +0000
	I0911 11:29:48.868780 2238380 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0911 11:29:48.868788 2238380 command_runner.go:130] > Change: 2023-09-11 11:29:12.801855626 +0000
	I0911 11:29:48.868794 2238380 command_runner.go:130] >  Birth: -
	I0911 11:29:48.869653 2238380 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:29:48.869676 2238380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:29:48.920143 2238380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:29:50.184852 2238380 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:29:50.216943 2238380 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:29:50.224615 2238380 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0911 11:29:50.247535 2238380 command_runner.go:130] > daemonset.apps/kindnet configured
	I0911 11:29:50.250001 2238380 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.329809039s)
	I0911 11:29:50.250051 2238380 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:29:50.250257 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:29:50.250288 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.250301 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.250317 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.255271 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:29:50.255293 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.255300 2238380 round_trippers.go:580]     Audit-Id: f1d6ca9a-57a7-4f18-b2da-78659a2b9533
	I0911 11:29:50.255306 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.255311 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.255317 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.255322 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.255327 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.257576 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"818"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83150 chars]
	I0911 11:29:50.263817 2238380 system_pods.go:59] 12 kube-system pods found
	I0911 11:29:50.263883 2238380 system_pods.go:61] "coredns-5dd5756b68-fzpjk" [f72f6ba0-92a3-4108-a37f-e6ad5009c37c] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 11:29:50.263899 2238380 system_pods.go:61] "etcd-multinode-378707" [30882221-42a4-42a4-9911-63a8ff26c903] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 11:29:50.263910 2238380 system_pods.go:61] "kindnet-gxpnd" [e59da67c-e818-45db-bbcd-db99a4310bf1] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0911 11:29:50.263922 2238380 system_pods.go:61] "kindnet-lrktz" [980de8e0-df33-41d2-847f-3f600dfcc611] Running
	I0911 11:29:50.263931 2238380 system_pods.go:61] "kindnet-p8h9v" [81e27af1-dd2f-464f-9daf-e1bdf9f1bdf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0911 11:29:50.263944 2238380 system_pods.go:61] "kube-apiserver-multinode-378707" [6cc96039-3a17-4243-93b6-4bf3ed6f69a8] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 11:29:50.263956 2238380 system_pods.go:61] "kube-controller-manager-multinode-378707" [7bd2ecf1-1558-4680-9075-d30d989a0568] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 11:29:50.263978 2238380 system_pods.go:61] "kube-proxy-8gcxx" [f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7] Running
	I0911 11:29:50.263985 2238380 system_pods.go:61] "kube-proxy-kwvbm" [6a1764e3-ef89-4687-874e-03baf3e90296] Running
	I0911 11:29:50.263995 2238380 system_pods.go:61] "kube-proxy-snbc8" [c3bb9995-3cd6-4433-a326-3da0a7f4aff3] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 11:29:50.264008 2238380 system_pods.go:61] "kube-scheduler-multinode-378707" [51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 11:29:50.264020 2238380 system_pods.go:61] "storage-provisioner" [77e1a93d-fc34-4f05-8320-169bb6c93e46] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 11:29:50.264033 2238380 system_pods.go:74] duration metric: took 13.972613ms to wait for pod list to return data ...
	I0911 11:29:50.264082 2238380 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:29:50.264194 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0911 11:29:50.264206 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.264217 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.264227 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.267448 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:50.267475 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.267486 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.267494 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.267502 2238380 round_trippers.go:580]     Audit-Id: fd6d297f-6218-491f-b4d3-1196f4f6b0ba
	I0911 11:29:50.267511 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.267524 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.267540 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.268452 2238380 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"818"},"items":[{"metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15252 chars]
	I0911 11:29:50.269667 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:29:50.269737 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:29:50.269819 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:29:50.269827 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:29:50.269834 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:29:50.269840 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:29:50.269846 2238380 node_conditions.go:105] duration metric: took 5.749149ms to run NodePressure ...
	I0911 11:29:50.269876 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:29:50.674466 2238380 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0911 11:29:50.674502 2238380 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0911 11:29:50.674531 2238380 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 11:29:50.674650 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%!D(MISSING)control-plane
	I0911 11:29:50.674663 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.674676 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.674686 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.682473 2238380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0911 11:29:50.682504 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.682515 2238380 round_trippers.go:580]     Audit-Id: a61a4b45-3604-438d-9b09-b32a8dc53d3c
	I0911 11:29:50.682525 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.682533 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.682541 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.682550 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.682561 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.685022 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"827"},"items":[{"metadata":{"name":"etcd-multinode-378707","namespace":"kube-system","uid":"30882221-42a4-42a4-9911-63a8ff26c903","resourceVersion":"747","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.237:2379","kubernetes.io/config.hash":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.mirror":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.seen":"2023-09-11T11:19:21.954681050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 28886 chars]
	I0911 11:29:50.686588 2238380 kubeadm.go:787] kubelet initialised
	I0911 11:29:50.686617 2238380 kubeadm.go:788] duration metric: took 12.074627ms waiting for restarted kubelet to initialise ...
	I0911 11:29:50.686629 2238380 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:29:50.686725 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:29:50.686739 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.686749 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.686758 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.690731 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:50.690761 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.690769 2238380 round_trippers.go:580]     Audit-Id: 327216b2-f15e-4683-88e9-5d16d7274d92
	I0911 11:29:50.690776 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.690782 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.690787 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.690792 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.690798 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.692037 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"827"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 83218 chars]
	I0911 11:29:50.695867 2238380 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:50.696014 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:29:50.696040 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.696051 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.696062 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.699914 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:50.699938 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.699949 2238380 round_trippers.go:580]     Audit-Id: 7728d7e5-5431-470a-b9a3-917e2bed787a
	I0911 11:29:50.699959 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.699968 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.699975 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.699981 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.699986 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.700782 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:29:50.701339 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:50.701358 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.701369 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.701380 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.703963 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:50.703986 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.703995 2238380 round_trippers.go:580]     Audit-Id: 92e985bd-a7ca-47ca-b733-3d050d3f4df0
	I0911 11:29:50.704003 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.704011 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.704018 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.704025 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.704033 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.704230 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:50.704587 2238380 pod_ready.go:97] node "multinode-378707" hosting pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.704610 2238380 pod_ready.go:81] duration metric: took 8.708139ms waiting for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	E0911 11:29:50.704622 2238380 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-378707" hosting pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.704638 2238380 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:50.704707 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-378707
	I0911 11:29:50.704717 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.704727 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.704738 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.707446 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:50.707473 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.707483 2238380 round_trippers.go:580]     Audit-Id: 608cf477-7399-4d83-97fc-dcf0d093c9cd
	I0911 11:29:50.707492 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.707501 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.707510 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.707518 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.707526 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.707644 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-378707","namespace":"kube-system","uid":"30882221-42a4-42a4-9911-63a8ff26c903","resourceVersion":"747","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.237:2379","kubernetes.io/config.hash":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.mirror":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.seen":"2023-09-11T11:19:21.954681050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 6077 chars]
	I0911 11:29:50.708207 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:50.708230 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.708241 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.708250 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.711943 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:50.711970 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.711980 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.711988 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.711996 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.712005 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.712014 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.712022 2238380 round_trippers.go:580]     Audit-Id: 5af3b254-e9f0-47ec-81ad-c57e610cb6b1
	I0911 11:29:50.713127 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:50.713449 2238380 pod_ready.go:97] node "multinode-378707" hosting pod "etcd-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.713466 2238380 pod_ready.go:81] duration metric: took 8.816943ms waiting for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	E0911 11:29:50.713474 2238380 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-378707" hosting pod "etcd-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.713489 2238380 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:50.713568 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-378707
	I0911 11:29:50.713576 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.713583 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.713590 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.716788 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:50.716825 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.716836 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.716845 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.716853 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.716861 2238380 round_trippers.go:580]     Audit-Id: 1fd513c5-fc66-44f3-b23d-5b32dcda10d5
	I0911 11:29:50.716870 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.716878 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.717752 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-378707","namespace":"kube-system","uid":"6cc96039-3a17-4243-93b6-4bf3ed6f69a8","resourceVersion":"744","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.237:8443","kubernetes.io/config.hash":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.mirror":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.seen":"2023-09-11T11:19:21.954683933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7633 chars]
	I0911 11:29:50.718359 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:50.718379 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.718390 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.718400 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.722023 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:50.722058 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.722067 2238380 round_trippers.go:580]     Audit-Id: bd68a70d-72d2-4cbb-a1b5-05f3af1fdad3
	I0911 11:29:50.722073 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.722079 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.722085 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.722091 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.722097 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.722759 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:50.723245 2238380 pod_ready.go:97] node "multinode-378707" hosting pod "kube-apiserver-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.723270 2238380 pod_ready.go:81] duration metric: took 9.773648ms waiting for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	E0911 11:29:50.723282 2238380 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-378707" hosting pod "kube-apiserver-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.723292 2238380 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:50.723373 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-378707
	I0911 11:29:50.723383 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.723394 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.723404 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.726114 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:50.726136 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.726144 2238380 round_trippers.go:580]     Audit-Id: 057e5972-a270-4c95-b661-7cfb9a07468a
	I0911 11:29:50.726150 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.726156 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.726167 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.726177 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.726185 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.726328 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-378707","namespace":"kube-system","uid":"7bd2ecf1-1558-4680-9075-d30d989a0568","resourceVersion":"748","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.mirror":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.seen":"2023-09-11T11:19:21.954684910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 7216 chars]
	I0911 11:29:50.726815 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:50.726827 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.726835 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.726843 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.729181 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:50.729205 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.729215 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.729224 2238380 round_trippers.go:580]     Audit-Id: d1301389-38df-4aac-aafc-8f2bd7293496
	I0911 11:29:50.729232 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.729240 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.729248 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.729256 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.729397 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:50.729827 2238380 pod_ready.go:97] node "multinode-378707" hosting pod "kube-controller-manager-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.729870 2238380 pod_ready.go:81] duration metric: took 6.568584ms waiting for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	E0911 11:29:50.729881 2238380 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-378707" hosting pod "kube-controller-manager-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:50.729891 2238380 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:50.875446 2238380 request.go:629] Waited for 145.404245ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:29:50.875537 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:29:50.875543 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:50.875555 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:50.875570 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:50.879875 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:29:50.879909 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:50.879921 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:50.879930 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:50.879939 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:50 GMT
	I0911 11:29:50.879947 2238380 round_trippers.go:580]     Audit-Id: e89fba1d-287e-4d15-b510-676d54b08605
	I0911 11:29:50.879962 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:50.879974 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:50.880136 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gcxx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7","resourceVersion":"506","creationTimestamp":"2023-09-11T11:20:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0911 11:29:51.075178 2238380 request.go:629] Waited for 194.424747ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:29:51.075271 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:29:51.075278 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:51.075289 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:51.075296 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:51.078114 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:51.078140 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:51.078152 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:51 GMT
	I0911 11:29:51.078162 2238380 round_trippers.go:580]     Audit-Id: 06bcac0a-1f15-4746-a8fa-76c1bed595bc
	I0911 11:29:51.078172 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:51.078182 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:51.078193 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:51.078202 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:51.078327 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"737","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3684 chars]
	I0911 11:29:51.078688 2238380 pod_ready.go:92] pod "kube-proxy-8gcxx" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:51.078707 2238380 pod_ready.go:81] duration metric: took 348.803874ms waiting for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:51.078721 2238380 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:51.275241 2238380 request.go:629] Waited for 196.425441ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:29:51.275334 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:29:51.275341 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:51.275352 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:51.275361 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:51.278819 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:51.278845 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:51.278853 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:51.278859 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:51.278866 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:51.278875 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:51.278884 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:51 GMT
	I0911 11:29:51.278893 2238380 round_trippers.go:580]     Audit-Id: 87884940-8944-4862-a925-b6c883a2bf06
	I0911 11:29:51.279014 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kwvbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a1764e3-ef89-4687-874e-03baf3e90296","resourceVersion":"711","creationTimestamp":"2023-09-11T11:21:07Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:21:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0911 11:29:51.474916 2238380 request.go:629] Waited for 195.31976ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:29:51.474979 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:29:51.474985 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:51.474992 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:51.474999 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:51.478294 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:51.478319 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:51.478335 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:51 GMT
	I0911 11:29:51.478344 2238380 round_trippers.go:580]     Audit-Id: 0b497a86-b7e6-4663-b1b7-d445868fd85f
	I0911 11:29:51.478353 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:51.478362 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:51.478368 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:51.478373 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:51.478574 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m03","uid":"b37f601c-a45d-4f04-b0fa-26387559968e","resourceVersion":"736","creationTimestamp":"2023-09-11T11:21:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:21:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0911 11:29:51.479082 2238380 pod_ready.go:92] pod "kube-proxy-kwvbm" in "kube-system" namespace has status "Ready":"True"
	I0911 11:29:51.479111 2238380 pod_ready.go:81] duration metric: took 400.379386ms waiting for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:51.479127 2238380 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:51.675694 2238380 request.go:629] Waited for 196.465599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:29:51.675771 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:29:51.675776 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:51.675784 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:51.675791 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:51.678996 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:51.679025 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:51.679041 2238380 round_trippers.go:580]     Audit-Id: 8ae909ab-1934-49a8-acfd-ec3447f49cac
	I0911 11:29:51.679054 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:51.679066 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:51.679077 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:51.679089 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:51.679101 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:51 GMT
	I0911 11:29:51.679481 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-snbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"c3bb9995-3cd6-4433-a326-3da0a7f4aff3","resourceVersion":"826","creationTimestamp":"2023-09-11T11:19:35Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0911 11:29:51.875443 2238380 request.go:629] Waited for 195.471931ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:51.875533 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:51.875540 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:51.875552 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:51.875568 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:51.878339 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:51.878365 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:51.878377 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:51.878385 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:51.878395 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:51.878409 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:51.878418 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:51 GMT
	I0911 11:29:51.878426 2238380 round_trippers.go:580]     Audit-Id: 518018e8-7119-4fad-a337-1e9f17998441
	I0911 11:29:51.880332 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:51.881264 2238380 pod_ready.go:97] node "multinode-378707" hosting pod "kube-proxy-snbc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:51.881286 2238380 pod_ready.go:81] duration metric: took 402.152077ms waiting for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	E0911 11:29:51.881293 2238380 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-378707" hosting pod "kube-proxy-snbc8" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:51.881300 2238380 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:52.074781 2238380 request.go:629] Waited for 193.348218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:29:52.074879 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:29:52.074886 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:52.074899 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:52.074910 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:52.078010 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:52.078034 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:52.078042 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:52.078047 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:52.078053 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:52 GMT
	I0911 11:29:52.078058 2238380 round_trippers.go:580]     Audit-Id: 379eb3dc-b3d4-4b80-875c-c1bd2a8afeef
	I0911 11:29:52.078063 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:52.078068 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:52.078614 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-378707","namespace":"kube-system","uid":"51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7","resourceVersion":"742","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.mirror":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.seen":"2023-09-11T11:19:21.954685589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4928 chars]
	I0911 11:29:52.275528 2238380 request.go:629] Waited for 196.378546ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:52.275606 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:52.275611 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:52.275620 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:52.275628 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:52.278636 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:52.278660 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:52.278671 2238380 round_trippers.go:580]     Audit-Id: 7a4c9aa0-29f9-4441-ba3d-50a0d0892048
	I0911 11:29:52.278680 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:52.278689 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:52.278698 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:52.278708 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:52.278713 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:52 GMT
	I0911 11:29:52.278835 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:52.279290 2238380 pod_ready.go:97] node "multinode-378707" hosting pod "kube-scheduler-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:52.279322 2238380 pod_ready.go:81] duration metric: took 398.01587ms waiting for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	E0911 11:29:52.279333 2238380 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-378707" hosting pod "kube-scheduler-multinode-378707" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-378707" has status "Ready":"False"
	I0911 11:29:52.279344 2238380 pod_ready.go:38] duration metric: took 1.592702754s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
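The pod_ready entries above poll each system-critical pod and skip it for as long as the node hosting it reports Ready=False. A minimal client-go sketch of that check follows; it is illustrative only (not minikube's pod_ready.go), podAndNodeReady is a hypothetical helper, and the pod/namespace names are simply the ones from this run.

// Sketch of the readiness check logged above: a pod only counts as "Ready"
// when its PodReady condition is True and the node hosting it is also Ready.
// Hypothetical helper, not minikube's implementation.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podAndNodeReady(ctx context.Context, cs *kubernetes.Clientset, ns, pod string) (bool, error) {
	p, err := cs.CoreV1().Pods(ns).Get(ctx, pod, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	// The log above skips pods whose hosting node reports Ready=False.
	n, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady && c.Status != corev1.ConditionTrue {
			return false, fmt.Errorf("node %q hosting pod %q is not Ready", n.Name, pod)
		}
	}
	// Otherwise the pod's own Ready condition decides.
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
			return true, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ok, err := podAndNodeReady(context.Background(), cs, "kube-system", "kube-scheduler-multinode-378707")
	fmt.Println(ok, err)
}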
	I0911 11:29:52.279375 2238380 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 11:29:52.292318 2238380 command_runner.go:130] > -16
	I0911 11:29:52.292368 2238380 ops.go:34] apiserver oom_adj: -16
	I0911 11:29:52.292379 2238380 kubeadm.go:640] restartCluster took 23.271902488s
	I0911 11:29:52.292391 2238380 kubeadm.go:406] StartCluster complete in 23.317727721s
	I0911 11:29:52.292416 2238380 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:29:52.292508 2238380 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:29:52.293168 2238380 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:29:52.293422 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 11:29:52.293571 2238380 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false]
	I0911 11:29:52.293725 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:29:52.293792 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:29:52.297326 2238380 out.go:177] * Enabled addons: 
	I0911 11:29:52.294105 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:29:52.301394 2238380 addons.go:502] enable addons completed in 7.812564ms: enabled=[]
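The recurring request.go "Waited for ... due to client-side throttling" messages come from client-go's client-side rate limiter: the rest.Config dumped above leaves QPS and Burst at 0, so client-go falls back to its small defaults (roughly 5 requests/s with a burst of 10). A hedged sketch of raising those limits on a rest.Config is shown below; this is not something minikube does in this run, just an illustration of the two fields involved.

// Illustrative only: raise client-go's client-side rate limits so that
// bursts of GETs (like the polling above) are not queued by request.go.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero (as in the rest.Config logged above),
	// client-go applies a small default limit, which produces the
	// "client-side throttling" waits seen in this log.
	cfg.QPS = 50
	cfg.Burst = 100
	_ = kubernetes.NewForConfigOrDie(cfg)
}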
	I0911 11:29:52.297661 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:29:52.301460 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:52.301475 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:52.301495 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:52.304674 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:52.304704 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:52.304715 2238380 round_trippers.go:580]     Content-Length: 291
	I0911 11:29:52.304724 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:52 GMT
	I0911 11:29:52.304734 2238380 round_trippers.go:580]     Audit-Id: ff7d118c-9753-4af5-af95-3802e56ea1e7
	I0911 11:29:52.304743 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:52.304753 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:52.304761 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:52.304768 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:52.304798 2238380 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"825","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0911 11:29:52.305064 2238380 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-378707" context rescaled to 1 replicas
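The coredns rescale above goes through the Deployment's autoscaling/v1 Scale subresource (a GET of .../deployments/coredns/scale followed by an update). A minimal client-go sketch of the same operation, assuming a kubeconfig at the default location and not reproducing minikube's kapi.go:

// Sketch: rescale the kube-system/coredns Deployment to 1 replica via the
// Scale subresource, mirroring the request/response logged above.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// GET the current Scale object, set the desired replica count, PUT it back.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
}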
	I0911 11:29:52.305112 2238380 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:29:52.307158 2238380 out.go:177] * Verifying Kubernetes components...
	I0911 11:29:52.308953 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:29:52.395341 2238380 command_runner.go:130] > apiVersion: v1
	I0911 11:29:52.395373 2238380 command_runner.go:130] > data:
	I0911 11:29:52.395380 2238380 command_runner.go:130] >   Corefile: |
	I0911 11:29:52.395385 2238380 command_runner.go:130] >     .:53 {
	I0911 11:29:52.395390 2238380 command_runner.go:130] >         log
	I0911 11:29:52.395398 2238380 command_runner.go:130] >         errors
	I0911 11:29:52.395403 2238380 command_runner.go:130] >         health {
	I0911 11:29:52.395409 2238380 command_runner.go:130] >            lameduck 5s
	I0911 11:29:52.395415 2238380 command_runner.go:130] >         }
	I0911 11:29:52.395421 2238380 command_runner.go:130] >         ready
	I0911 11:29:52.395429 2238380 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0911 11:29:52.395436 2238380 command_runner.go:130] >            pods insecure
	I0911 11:29:52.395445 2238380 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0911 11:29:52.395451 2238380 command_runner.go:130] >            ttl 30
	I0911 11:29:52.395457 2238380 command_runner.go:130] >         }
	I0911 11:29:52.395464 2238380 command_runner.go:130] >         prometheus :9153
	I0911 11:29:52.395471 2238380 command_runner.go:130] >         hosts {
	I0911 11:29:52.395479 2238380 command_runner.go:130] >            192.168.39.1 host.minikube.internal
	I0911 11:29:52.395486 2238380 command_runner.go:130] >            fallthrough
	I0911 11:29:52.395492 2238380 command_runner.go:130] >         }
	I0911 11:29:52.395500 2238380 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0911 11:29:52.395507 2238380 command_runner.go:130] >            max_concurrent 1000
	I0911 11:29:52.395515 2238380 command_runner.go:130] >         }
	I0911 11:29:52.395522 2238380 command_runner.go:130] >         cache 30
	I0911 11:29:52.395534 2238380 command_runner.go:130] >         loop
	I0911 11:29:52.395540 2238380 command_runner.go:130] >         reload
	I0911 11:29:52.395546 2238380 command_runner.go:130] >         loadbalance
	I0911 11:29:52.395555 2238380 command_runner.go:130] >     }
	I0911 11:29:52.395561 2238380 command_runner.go:130] > kind: ConfigMap
	I0911 11:29:52.395569 2238380 command_runner.go:130] > metadata:
	I0911 11:29:52.395576 2238380 command_runner.go:130] >   creationTimestamp: "2023-09-11T11:19:21Z"
	I0911 11:29:52.395586 2238380 command_runner.go:130] >   name: coredns
	I0911 11:29:52.395592 2238380 command_runner.go:130] >   namespace: kube-system
	I0911 11:29:52.395600 2238380 command_runner.go:130] >   resourceVersion: "382"
	I0911 11:29:52.395610 2238380 command_runner.go:130] >   uid: f37a9dec-5b61-473f-80fb-18b2584b4b79
	I0911 11:29:52.398145 2238380 node_ready.go:35] waiting up to 6m0s for node "multinode-378707" to be "Ready" ...
	I0911 11:29:52.398181 2238380 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
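start.go:874 skips editing CoreDNS because the Corefile fetched above already contains the hosts block for host.minikube.internal. A rough client-go equivalent of that check is sketched below; it is an assumption-laden illustration, not minikube's code.

// Sketch: fetch the kube-system/coredns ConfigMap and test whether its
// Corefile already carries the host.minikube.internal host record.
package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	cm, err := cs.CoreV1().ConfigMaps("kube-system").Get(context.Background(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	if strings.Contains(cm.Data["Corefile"], "host.minikube.internal") {
		fmt.Println("CoreDNS already contains host.minikube.internal host record, skipping")
	}
}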
	I0911 11:29:52.475545 2238380 request.go:629] Waited for 77.240383ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:52.475629 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:52.475645 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:52.475657 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:52.475667 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:52.478539 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:52.478567 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:52.478576 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:52 GMT
	I0911 11:29:52.478582 2238380 round_trippers.go:580]     Audit-Id: bf4638d7-03bb-40c1-8965-e8d7a715a866
	I0911 11:29:52.478587 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:52.478593 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:52.478598 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:52.478603 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:52.479314 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:52.675216 2238380 request.go:629] Waited for 195.362599ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:52.675295 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:52.675301 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:52.675312 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:52.675322 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:52.683891 2238380 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0911 11:29:52.683924 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:52.683935 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:52.683945 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:52.683953 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:52 GMT
	I0911 11:29:52.683962 2238380 round_trippers.go:580]     Audit-Id: 0377d7da-dfc0-45f7-bc8d-65ae7c2d80f8
	I0911 11:29:52.683971 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:52.683983 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:52.684118 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:53.185316 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:53.185343 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:53.185355 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:53.185361 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:53.188167 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:53.188197 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:53.188209 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:53 GMT
	I0911 11:29:53.188219 2238380 round_trippers.go:580]     Audit-Id: 2ec6c281-46b2-4742-b6e5-81809d7c9936
	I0911 11:29:53.188228 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:53.188235 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:53.188241 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:53.188246 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:53.188379 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:53.685058 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:53.685088 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:53.685100 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:53.685111 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:53.688841 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:53.688870 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:53.688881 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:53 GMT
	I0911 11:29:53.688891 2238380 round_trippers.go:580]     Audit-Id: 71c831cf-b1c5-4c0f-8cda-b559a200a904
	I0911 11:29:53.688897 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:53.688906 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:53.688914 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:53.688923 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:53.689234 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:54.184941 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:54.184968 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:54.184976 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:54.184983 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:54.188018 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:54.188042 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:54.188049 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:54.188055 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:54.188061 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:54 GMT
	I0911 11:29:54.188066 2238380 round_trippers.go:580]     Audit-Id: f7d04c46-5200-4f1b-8451-657274e6fda4
	I0911 11:29:54.188075 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:54.188080 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:54.188476 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:54.685273 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:54.685301 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:54.685310 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:54.685317 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:54.688217 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:54.688250 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:54.688261 2238380 round_trippers.go:580]     Audit-Id: e6cac259-b966-4d1e-ae0b-6010d50eb4aa
	I0911 11:29:54.688270 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:54.688278 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:54.688287 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:54.688295 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:54.688304 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:54 GMT
	I0911 11:29:54.688660 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:54.689067 2238380 node_ready.go:58] node "multinode-378707" has status "Ready":"False"
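From here node_ready.go keeps polling the node object (roughly every 500ms, matching the request spacing in the timestamps, up to the 6m0s budget noted above) until its Ready condition flips to True. A hand-rolled sketch of that loop, again assuming a default kubeconfig and using the node name from this run:

// Sketch of the node_ready wait visible above: poll the node every 500ms for
// up to 6 minutes until NodeReady is True. Not minikube's implementation.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(ctx, "multinode-378707", metav1.GetOptions{})
		if err == nil {
			for _, c := range n.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}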
	I0911 11:29:55.185497 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:55.185527 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:55.185539 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:55.185549 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:55.188244 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:55.188270 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:55.188289 2238380 round_trippers.go:580]     Audit-Id: acfe6e1b-7e8d-4b6e-b3d6-bf18ef5ed1ec
	I0911 11:29:55.188299 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:55.188311 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:55.188323 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:55.188335 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:55.188347 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:55 GMT
	I0911 11:29:55.188642 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:55.685437 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:55.685471 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:55.685483 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:55.685492 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:55.688467 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:55.688497 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:55.688508 2238380 round_trippers.go:580]     Audit-Id: 9ab0d4a2-89f2-4399-9e6f-9dac45a83e5e
	I0911 11:29:55.688517 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:55.688524 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:55.688532 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:55.688540 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:55.688547 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:55 GMT
	I0911 11:29:55.688941 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:56.185054 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:56.185080 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:56.185089 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:56.185095 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:56.188217 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:56.188247 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:56.188257 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:56.188266 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:56.188274 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:56 GMT
	I0911 11:29:56.188282 2238380 round_trippers.go:580]     Audit-Id: 12a93618-1576-435b-a2b3-784df959d935
	I0911 11:29:56.188290 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:56.188297 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:56.188558 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:56.684967 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:56.684994 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:56.685003 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:56.685010 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:56.687885 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:56.687912 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:56.687920 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:56.687926 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:56.687931 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:56.687936 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:56 GMT
	I0911 11:29:56.687942 2238380 round_trippers.go:580]     Audit-Id: 7ed66acb-af95-4c39-955b-038a46f9f5d8
	I0911 11:29:56.687947 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:56.688409 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:57.185124 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:57.185159 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:57.185173 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:57.185183 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:57.188627 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:57.188652 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:57.188659 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:57.188667 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:57.188672 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:57.188678 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:57.188683 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:57 GMT
	I0911 11:29:57.188689 2238380 round_trippers.go:580]     Audit-Id: 0d03f3d9-e6a6-4a8f-bed2-4744ec661ce3
	I0911 11:29:57.188915 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:57.189281 2238380 node_ready.go:58] node "multinode-378707" has status "Ready":"False"
	I0911 11:29:57.685686 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:57.685718 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:57.685729 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:57.685737 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:57.688506 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:57.688535 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:57.688545 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:57 GMT
	I0911 11:29:57.688554 2238380 round_trippers.go:580]     Audit-Id: 93491b91-0444-45e2-b665-3c06fe5140b6
	I0911 11:29:57.688563 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:57.688571 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:57.688579 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:57.688588 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:57.689063 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"741","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6118 chars]
	I0911 11:29:58.185215 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:58.185239 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.185249 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.185255 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.188961 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:58.188994 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.189002 2238380 round_trippers.go:580]     Audit-Id: 4a36f868-df77-4b6d-88ef-c69db68d64c3
	I0911 11:29:58.189008 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.189016 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.189025 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.189034 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.189042 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.189238 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:29:58.189698 2238380 node_ready.go:49] node "multinode-378707" has status "Ready":"True"
	I0911 11:29:58.189719 2238380 node_ready.go:38] duration metric: took 5.791544724s waiting for node "multinode-378707" to be "Ready" ...
	I0911 11:29:58.189732 2238380 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:29:58.189821 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:29:58.189833 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.189845 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.189859 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.194675 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:29:58.194703 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.194714 2238380 round_trippers.go:580]     Audit-Id: bc93e606-f8a5-4f62-9e89-fcc2f044feac
	I0911 11:29:58.194723 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.194731 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.194739 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.194747 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.194756 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.195985 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"871"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 82218 chars]
	I0911 11:29:58.199615 2238380 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:29:58.199720 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:29:58.199733 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.199745 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.199758 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.202204 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:58.202226 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.202233 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.202239 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.202246 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.202254 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.202263 2238380 round_trippers.go:580]     Audit-Id: 7c36d112-b288-49dd-b122-9e53b547a7eb
	I0911 11:29:58.202274 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.202389 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:29:58.202936 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:58.202953 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.202963 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.202969 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.205218 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:58.205235 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.205241 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.205250 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.205259 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.205268 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.205277 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.205286 2238380 round_trippers.go:580]     Audit-Id: 59f5308f-b319-44a8-80b5-3caee9b864bd
	I0911 11:29:58.205389 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:29:58.205732 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:29:58.205743 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.205750 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.205756 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.207839 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:58.207864 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.207872 2238380 round_trippers.go:580]     Audit-Id: 1babe051-d302-4219-93f8-043da043a79d
	I0911 11:29:58.207878 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.207883 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.207889 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.207897 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.207903 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.208172 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:29:58.208643 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:58.208655 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.208662 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.208668 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.210862 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:58.210880 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.210887 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.210893 2238380 round_trippers.go:580]     Audit-Id: e09d3200-4e05-4009-b7ff-474e89654a55
	I0911 11:29:58.210900 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.210916 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.210925 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.210938 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.211142 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:29:58.712333 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:29:58.712362 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.712371 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.712378 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.716389 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:58.716418 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.716428 2238380 round_trippers.go:580]     Audit-Id: 12df7fe5-d0c4-407f-ab0a-fe64ef4dc154
	I0911 11:29:58.716436 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.716444 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.716452 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.716459 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.716466 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.717032 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:29:58.717506 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:58.717517 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:58.717525 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:58.717532 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:58.720036 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:58.720059 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:58.720068 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:58 GMT
	I0911 11:29:58.720076 2238380 round_trippers.go:580]     Audit-Id: 60c38855-e725-4ad0-a706-c92a64223a74
	I0911 11:29:58.720083 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:58.720091 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:58.720099 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:58.720109 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:58.720235 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:29:59.212128 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:29:59.212155 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:59.212163 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:59.212169 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:59.215388 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:59.215414 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:59.215425 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:59 GMT
	I0911 11:29:59.215432 2238380 round_trippers.go:580]     Audit-Id: 74c0d78a-bd0a-41a5-a499-ed86ad6ac804
	I0911 11:29:59.215439 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:59.215448 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:59.215457 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:59.215466 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:59.215695 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:29:59.216288 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:59.216316 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:59.216325 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:59.216334 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:59.218746 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:59.218767 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:59.218777 2238380 round_trippers.go:580]     Audit-Id: f270278c-7267-4dd2-90b1-5c114fa7d860
	I0911 11:29:59.218785 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:59.218793 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:59.218803 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:59.218813 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:59.218824 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:59 GMT
	I0911 11:29:59.219304 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:29:59.712023 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:29:59.712052 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:59.712061 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:59.712068 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:59.715394 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:29:59.715425 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:59.715435 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:59 GMT
	I0911 11:29:59.715444 2238380 round_trippers.go:580]     Audit-Id: af4639bb-b35f-4f68-b195-6658b20a3880
	I0911 11:29:59.715453 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:59.715460 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:59.715467 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:59.715475 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:59.715743 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:29:59.716295 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:29:59.716317 2238380 round_trippers.go:469] Request Headers:
	I0911 11:29:59.716324 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:29:59.716330 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:29:59.718931 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:29:59.718953 2238380 round_trippers.go:577] Response Headers:
	I0911 11:29:59.718962 2238380 round_trippers.go:580]     Audit-Id: 0dfcac28-8ed2-4f66-9c56-e2477844e02d
	I0911 11:29:59.718971 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:29:59.718979 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:29:59.718985 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:29:59.718993 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:29:59.719002 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:29:59 GMT
	I0911 11:29:59.719183 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:00.211897 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:00.211926 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:00.211938 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:00.211947 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:00.216889 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:30:00.216924 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:00.216942 2238380 round_trippers.go:580]     Audit-Id: 27be05b6-7d06-4e2d-9cb8-c59398b553ab
	I0911 11:30:00.216952 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:00.216960 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:00.216969 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:00.216977 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:00.216986 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:00 GMT
	I0911 11:30:00.217480 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:00.218003 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:00.218018 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:00.218025 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:00.218031 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:00.220768 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:00.220786 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:00.220793 2238380 round_trippers.go:580]     Audit-Id: 9f5eb965-8b0a-47ab-8631-ac72eae0d0b0
	I0911 11:30:00.220801 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:00.220809 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:00.220832 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:00.220844 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:00.220855 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:00 GMT
	I0911 11:30:00.221181 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:00.221663 2238380 pod_ready.go:102] pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace has status "Ready":"False"
	I0911 11:30:00.711812 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:00.711840 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:00.711849 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:00.711856 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:00.715304 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:00.715328 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:00.715336 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:00.715352 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:00.715362 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:00 GMT
	I0911 11:30:00.715370 2238380 round_trippers.go:580]     Audit-Id: efab6f12-fff2-4c31-972d-f5b0c02640e3
	I0911 11:30:00.715378 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:00.715386 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:00.715710 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:00.716265 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:00.716329 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:00.716348 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:00.716360 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:00.718793 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:00.718814 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:00.718825 2238380 round_trippers.go:580]     Audit-Id: e89ab074-3e50-4fad-8a24-cfb43cac8fca
	I0911 11:30:00.718835 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:00.718841 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:00.718847 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:00.718852 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:00.718857 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:00 GMT
	I0911 11:30:00.719011 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:01.212517 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:01.212541 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:01.212550 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:01.212557 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:01.215947 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:01.215970 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:01.215978 2238380 round_trippers.go:580]     Audit-Id: 2fca2d67-50f3-4750-9f53-92c599d7ff64
	I0911 11:30:01.215983 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:01.215989 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:01.215994 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:01.216000 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:01.216006 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:01 GMT
	I0911 11:30:01.216624 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:01.217124 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:01.217137 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:01.217144 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:01.217151 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:01.219622 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:01.219653 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:01.219661 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:01.219666 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:01 GMT
	I0911 11:30:01.219674 2238380 round_trippers.go:580]     Audit-Id: b3ac7c16-5b3d-4523-ab5c-a6b0ad54a455
	I0911 11:30:01.219679 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:01.219685 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:01.219690 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:01.219829 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:01.712579 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:01.712618 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:01.712636 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:01.712647 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:01.718934 2238380 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0911 11:30:01.718962 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:01.718972 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:01.718982 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:01 GMT
	I0911 11:30:01.718990 2238380 round_trippers.go:580]     Audit-Id: e8e64844-1825-42ce-ba59-65e2b03fe2e4
	I0911 11:30:01.718997 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:01.719006 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:01.719014 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:01.719220 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:01.719749 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:01.719764 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:01.719774 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:01.719784 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:01.722775 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:01.722796 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:01.722803 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:01.722809 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:01.722815 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:01.722844 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:01.722856 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:01 GMT
	I0911 11:30:01.722879 2238380 round_trippers.go:580]     Audit-Id: 9e5bf168-dec2-4e79-b05d-50cf5f008457
	I0911 11:30:01.723844 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:02.212635 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:02.212707 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:02.212736 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:02.212746 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:02.216805 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:30:02.216851 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:02.216862 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:02.216875 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:02.216884 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:02.216892 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:02.216900 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:02 GMT
	I0911 11:30:02.216908 2238380 round_trippers.go:580]     Audit-Id: 01e9cd2e-431a-40da-b89d-18a7f7ebda9a
	I0911 11:30:02.217048 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:02.217557 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:02.217572 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:02.217580 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:02.217586 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:02.220135 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:02.220155 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:02.220163 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:02.220171 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:02.220177 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:02.220182 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:02.220189 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:02 GMT
	I0911 11:30:02.220198 2238380 round_trippers.go:580]     Audit-Id: f57544da-2501-4137-b484-f3f0ade01f02
	I0911 11:30:02.220403 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:02.712081 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:02.712109 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:02.712118 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:02.712124 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:02.715423 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:02.715446 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:02.715454 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:02.715461 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:02 GMT
	I0911 11:30:02.715466 2238380 round_trippers.go:580]     Audit-Id: 0ccce131-c6ab-467f-b44a-65feaa6e2016
	I0911 11:30:02.715472 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:02.715477 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:02.715482 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:02.715685 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:02.716340 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:02.716358 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:02.716366 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:02.716373 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:02.718859 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:02.718878 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:02.718886 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:02.718892 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:02 GMT
	I0911 11:30:02.718898 2238380 round_trippers.go:580]     Audit-Id: f935ae8b-f257-43d8-bd93-958c94b0ab69
	I0911 11:30:02.718904 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:02.718910 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:02.718916 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:02.719080 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:02.719497 2238380 pod_ready.go:102] pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace has status "Ready":"False"
	I0911 11:30:03.212357 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:03.212391 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:03.212405 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:03.212414 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:03.216369 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:03.216396 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:03.216404 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:03 GMT
	I0911 11:30:03.216409 2238380 round_trippers.go:580]     Audit-Id: db21e4b6-c46b-4a94-8d39-edcb016e95f8
	I0911 11:30:03.216416 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:03.216425 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:03.216431 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:03.216437 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:03.216675 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:03.217182 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:03.217196 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:03.217204 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:03.217210 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:03.220064 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:03.220082 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:03.220089 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:03.220095 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:03 GMT
	I0911 11:30:03.220101 2238380 round_trippers.go:580]     Audit-Id: 148ba5f5-4fc2-45d1-ac40-2a419f44154f
	I0911 11:30:03.220111 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:03.220120 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:03.220130 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:03.220479 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:03.712144 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:03.712171 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:03.712180 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:03.712187 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:03.715808 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:03.715832 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:03.715840 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:03.715845 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:03.715850 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:03.715856 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:03.715862 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:03 GMT
	I0911 11:30:03.715867 2238380 round_trippers.go:580]     Audit-Id: a55bc31a-c36e-42b4-abe2-4faa255dc078
	I0911 11:30:03.716098 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:03.716617 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:03.716632 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:03.716640 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:03.716646 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:03.720162 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:03.720182 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:03.720192 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:03.720200 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:03 GMT
	I0911 11:30:03.720208 2238380 round_trippers.go:580]     Audit-Id: 4ab0824f-68f6-4a19-8b0e-50d0bed4b026
	I0911 11:30:03.720216 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:03.720225 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:03.720241 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:03.720587 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:04.212380 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:04.212417 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.212429 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.212439 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.215756 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:04.215784 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.215792 2238380 round_trippers.go:580]     Audit-Id: e8e5cc1f-5d46-4158-af5c-137cc696ec59
	I0911 11:30:04.215801 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.215810 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.215817 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.215825 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.215833 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.216011 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"746","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6370 chars]
	I0911 11:30:04.216498 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:04.216510 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.216517 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.216524 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.218897 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.218924 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.218935 2238380 round_trippers.go:580]     Audit-Id: cab1a382-66aa-4c78-bf82-8a79a9c35aa7
	I0911 11:30:04.218943 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.218948 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.218956 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.218962 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.218967 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.219060 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:04.711645 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:30:04.711677 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.711687 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.711693 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.714545 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.714566 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.714573 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.714579 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.714586 2238380 round_trippers.go:580]     Audit-Id: 41be83cc-6883-49d7-91b0-ebd07ebbec76
	I0911 11:30:04.714596 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.714603 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.714610 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.715054 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"893","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0911 11:30:04.715513 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:04.715525 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.715533 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.715539 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.718058 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.718080 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.718091 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.718100 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.718108 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.718118 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.718126 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.718135 2238380 round_trippers.go:580]     Audit-Id: aca568ab-ac86-4bab-9f55-fa3d4c4f203c
	I0911 11:30:04.718487 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:04.718798 2238380 pod_ready.go:92] pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:04.718812 2238380 pod_ready.go:81] duration metric: took 6.519170192s waiting for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
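The ~6.5s wait that just completed is the pattern repeated throughout this log: pod_ready re-fetches the Pod roughly every 500ms and checks its Ready condition until it turns True or the 6m0s budget runs out. Below is a minimal, illustrative sketch of that kind of poll written against plain client-go; it is not minikube's pod_ready.go, and cs, namespace and podName are placeholders supplied by the caller.

// Illustrative only: a readiness poll of the kind reflected in the log above.
// Not minikube's actual implementation.
package readiness

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady re-fetches the pod every 500ms until its Ready condition
// is True or the timeout (e.g. the 6m0s above) expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, namespace, podName string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(namespace).Get(ctx, podName, metav1.GetOptions{})
			if err != nil {
				return false, err // give up on hard API errors
			}
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady {
					return cond.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil // Ready condition not reported yet; keep polling
		})
}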
	I0911 11:30:04.718821 2238380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.718872 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-378707
	I0911 11:30:04.718883 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.718890 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.718897 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.721370 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.721387 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.721396 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.721406 2238380 round_trippers.go:580]     Audit-Id: 2dab4f1c-3b15-490b-adac-72088ea2eef2
	I0911 11:30:04.721415 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.721427 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.721435 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.721441 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.721601 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-378707","namespace":"kube-system","uid":"30882221-42a4-42a4-9911-63a8ff26c903","resourceVersion":"885","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.237:2379","kubernetes.io/config.hash":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.mirror":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.seen":"2023-09-11T11:19:21.954681050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0911 11:30:04.722016 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:04.722030 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.722038 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.722047 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.723979 2238380 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:30:04.723996 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.724016 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.724029 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.724042 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.724055 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.724068 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.724081 2238380 round_trippers.go:580]     Audit-Id: 69fe220c-8e74-4f27-9e08-7173eb7ffc68
	I0911 11:30:04.724205 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:04.724538 2238380 pod_ready.go:92] pod "etcd-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:04.724553 2238380 pod_ready.go:81] duration metric: took 5.726981ms waiting for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
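Unlike the ReplicaSet-owned coredns pod earlier, the etcd-multinode-378707 body above carries a kubernetes.io/config.mirror annotation and a single ownerReference of kind Node: it is a static pod defined by a manifest file on the control-plane host, and the API object is only the kubelet's mirror of it (hence "kubernetes.io/config.source":"file"). A small helper sketch for telling the two apart follows; it is not from the minikube code base.

// Hedged sketch: recognise a static (mirror) pod the way the etcd and
// kube-apiserver entries in this log present themselves.
package readiness

import corev1 "k8s.io/api/core/v1"

func isStaticMirrorPod(pod *corev1.Pod) bool {
	// The kubelet stamps mirror pods with this annotation.
	if _, ok := pod.Annotations["kubernetes.io/config.mirror"]; ok {
		return true
	}
	// Mirror pods are owned by their Node rather than a ReplicaSet/DaemonSet.
	for _, ref := range pod.OwnerReferences {
		if ref.Kind == "Node" && ref.Controller != nil && *ref.Controller {
			return true
		}
	}
	return false
}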
	I0911 11:30:04.724576 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.724645 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-378707
	I0911 11:30:04.724656 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.724667 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.724680 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.726758 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.726788 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.726795 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.726800 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.726806 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.726811 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.726819 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.726824 2238380 round_trippers.go:580]     Audit-Id: 59a0b3a4-b5ae-4484-aa28-11bb78ed57e0
	I0911 11:30:04.727670 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-378707","namespace":"kube-system","uid":"6cc96039-3a17-4243-93b6-4bf3ed6f69a8","resourceVersion":"861","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.237:8443","kubernetes.io/config.hash":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.mirror":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.seen":"2023-09-11T11:19:21.954683933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0911 11:30:04.728052 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:04.728063 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.728070 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.728076 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.730488 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.730508 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.730516 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.730521 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.730527 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.730534 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.730547 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.730560 2238380 round_trippers.go:580]     Audit-Id: 80d98fe6-96c1-4a53-942c-d1c0cf40da4b
	I0911 11:30:04.730689 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:04.730953 2238380 pod_ready.go:92] pod "kube-apiserver-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:04.730965 2238380 pod_ready.go:81] duration metric: took 6.377747ms waiting for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.730975 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.731030 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-378707
	I0911 11:30:04.731037 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.731044 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.731053 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.734569 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:04.734590 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.734598 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.734604 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.734611 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.734620 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.734635 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.734640 2238380 round_trippers.go:580]     Audit-Id: 81a8b857-454a-48d5-b5d7-8c45337da328
	I0911 11:30:04.735494 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-378707","namespace":"kube-system","uid":"7bd2ecf1-1558-4680-9075-d30d989a0568","resourceVersion":"859","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.mirror":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.seen":"2023-09-11T11:19:21.954684910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0911 11:30:04.735874 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:04.735884 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.735892 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.735898 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.738607 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.738625 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.738632 2238380 round_trippers.go:580]     Audit-Id: 7393bf04-659a-4428-b725-7c6c4e9d4943
	I0911 11:30:04.738637 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.738642 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.738648 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.738653 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.738658 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.739183 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:04.739461 2238380 pod_ready.go:92] pod "kube-controller-manager-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:04.739474 2238380 pod_ready.go:81] duration metric: took 8.489398ms waiting for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.739484 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.739527 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:30:04.739537 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.739544 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.739550 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.743406 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:04.743430 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.743438 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.743446 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.743456 2238380 round_trippers.go:580]     Audit-Id: 89d002dc-a596-4fc2-8a39-538c7724e5d5
	I0911 11:30:04.743464 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.743472 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.743491 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.743571 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gcxx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7","resourceVersion":"506","creationTimestamp":"2023-09-11T11:20:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0911 11:30:04.743961 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:30:04.743973 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.743980 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.743988 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.746234 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.746255 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.746264 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.746273 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.746281 2238380 round_trippers.go:580]     Audit-Id: a8ca1eeb-3113-464a-b04e-3dd7ab06927f
	I0911 11:30:04.746290 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.746297 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.746306 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.746383 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"53376bc7-b94e-4f8b-bce2-026875c17588","resourceVersion":"737","creationTimestamp":"2023-09-11T11:20:16Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:16Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3684 chars]
	I0911 11:30:04.746611 2238380 pod_ready.go:92] pod "kube-proxy-8gcxx" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:04.746623 2238380 pod_ready.go:81] duration metric: took 7.134859ms waiting for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.746632 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:04.912098 2238380 request.go:629] Waited for 165.37867ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:30:04.912193 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:30:04.912198 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:04.912206 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:04.912213 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:04.915226 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:04.915248 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:04.915255 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:04.915261 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:04 GMT
	I0911 11:30:04.915272 2238380 round_trippers.go:580]     Audit-Id: f1fee0b1-748a-4135-81b6-f58489fe2021
	I0911 11:30:04.915278 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:04.915283 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:04.915288 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:04.915541 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kwvbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a1764e3-ef89-4687-874e-03baf3e90296","resourceVersion":"711","creationTimestamp":"2023-09-11T11:21:07Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:21:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0911 11:30:05.112460 2238380 request.go:629] Waited for 196.433239ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:30:05.112546 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:30:05.112551 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:05.112560 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:05.112567 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:05.115628 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:05.115655 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:05.115664 2238380 round_trippers.go:580]     Audit-Id: 303685d4-602b-4c1c-892f-03e10e78927b
	I0911 11:30:05.115677 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:05.115686 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:05.115694 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:05.115711 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:05.115719 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:05 GMT
	I0911 11:30:05.115968 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m03","uid":"b37f601c-a45d-4f04-b0fa-26387559968e","resourceVersion":"736","creationTimestamp":"2023-09-11T11:21:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:21:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0911 11:30:05.116250 2238380 pod_ready.go:92] pod "kube-proxy-kwvbm" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:05.116264 2238380 pod_ready.go:81] duration metric: took 369.625683ms waiting for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
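The "Waited for … due to client-side throttling, not priority and fairness" lines in this stretch come from client-go's local rate limiter, not from the API server: with QPS and Burst left at their zero values the client defaults to about 5 requests/s with a burst of 10, so back-to-back GETs like these readiness checks get queued for a couple hundred milliseconds before they are even sent. A minimal sketch of loosening that limiter is below, assuming a kubeconfig-based client; the kubeconfig path is a placeholder and this is not minikube's own configuration.

// Illustrative only: where the client-side throttling above originates and
// how a client could raise the local limits.
package readiness

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClientWithHigherLimits(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// Raise the local rate limiter so bursts of GETs (like the polls in this
	// log) are not delayed on the client side before reaching the API server.
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}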
	I0911 11:30:05.116274 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:05.311681 2238380 request.go:629] Waited for 195.310844ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:30:05.311751 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:30:05.311756 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:05.311764 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:05.311774 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:05.314827 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:05.314860 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:05.314872 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:05.314880 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:05.314888 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:05 GMT
	I0911 11:30:05.314898 2238380 round_trippers.go:580]     Audit-Id: 4eab9a26-35c7-4a7c-b65f-06137a79f972
	I0911 11:30:05.314908 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:05.314916 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:05.315085 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-snbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"c3bb9995-3cd6-4433-a326-3da0a7f4aff3","resourceVersion":"826","creationTimestamp":"2023-09-11T11:19:35Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0911 11:30:05.512127 2238380 request.go:629] Waited for 196.46083ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:05.512212 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:05.512217 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:05.512231 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:05.512238 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:05.517071 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:30:05.517103 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:05.517115 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:05.517124 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:05.517130 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:05.517136 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:05.517141 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:05 GMT
	I0911 11:30:05.517147 2238380 round_trippers.go:580]     Audit-Id: 35cc4ead-d29b-4c12-b18f-ff39218a5871
	I0911 11:30:05.517343 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:05.517695 2238380 pod_ready.go:92] pod "kube-proxy-snbc8" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:05.517711 2238380 pod_ready.go:81] duration metric: took 401.430733ms waiting for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:05.517721 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:05.712256 2238380 request.go:629] Waited for 194.405469ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:30:05.712332 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:30:05.712337 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:05.712345 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:05.712366 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:05.715218 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:30:05.715240 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:05.715247 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:05.715252 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:05 GMT
	I0911 11:30:05.715258 2238380 round_trippers.go:580]     Audit-Id: 27bfb4fd-db1c-4d0a-9ab9-f1a0fc7a1396
	I0911 11:30:05.715263 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:05.715269 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:05.715274 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:05.715384 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-378707","namespace":"kube-system","uid":"51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7","resourceVersion":"867","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.mirror":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.seen":"2023-09-11T11:19:21.954685589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0911 11:30:05.912322 2238380 request.go:629] Waited for 196.506078ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:05.912396 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:30:05.912401 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:05.912409 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:05.912421 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:05.915695 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:05.915725 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:05.915737 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:05.915746 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:05.915755 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:05.915763 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:05 GMT
	I0911 11:30:05.915771 2238380 round_trippers.go:580]     Audit-Id: e516778e-f3db-45c0-b125-4282e9d74a16
	I0911 11:30:05.915780 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:05.915896 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 5942 chars]
	I0911 11:30:05.916267 2238380 pod_ready.go:92] pod "kube-scheduler-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:30:05.916283 2238380 pod_ready.go:81] duration metric: took 398.555001ms waiting for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:30:05.916293 2238380 pod_ready.go:38] duration metric: took 7.726545245s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
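
The pod_ready wait logged above polls each system-critical pod until its Ready condition reports True. A minimal client-go sketch of that same check, assuming a standard kubeconfig; the kubeconfig path and pod name below are illustrative placeholders, not values taken from this run:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's Ready condition is True,
// mirroring the `"Ready":"True"` checks in the log above.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Placeholder kubeconfig path and pod name for illustration.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-multinode-378707", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println(pod.Name, "ready:", podReady(pod))
}
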
	I0911 11:30:05.916309 2238380 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:30:05.916363 2238380 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:30:05.930672 2238380 command_runner.go:130] > 1108
	I0911 11:30:05.930727 2238380 api_server.go:72] duration metric: took 13.62556998s to wait for apiserver process to appear ...
	I0911 11:30:05.930737 2238380 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:30:05.930753 2238380 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:30:05.936157 2238380 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0911 11:30:05.936240 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/version
	I0911 11:30:05.936247 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:05.936256 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:05.936263 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:05.937514 2238380 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:30:05.937536 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:05.937542 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:05 GMT
	I0911 11:30:05.937547 2238380 round_trippers.go:580]     Audit-Id: ffec3076-9a71-46e0-860e-0a18c9cdf078
	I0911 11:30:05.937553 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:05.937558 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:05.937564 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:05.937572 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:05.937584 2238380 round_trippers.go:580]     Content-Length: 263
	I0911 11:30:05.937624 2238380 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "28",
	  "gitVersion": "v1.28.1",
	  "gitCommit": "8dc49c4b984b897d423aab4971090e1879eb4f23",
	  "gitTreeState": "clean",
	  "buildDate": "2023-08-24T11:16:30Z",
	  "goVersion": "go1.20.7",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0911 11:30:05.937667 2238380 api_server.go:141] control plane version: v1.28.1
	I0911 11:30:05.937681 2238380 api_server.go:131] duration metric: took 6.93932ms to wait for apiserver health ...
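
The healthz and /version probes above are plain HTTPS GETs against the apiserver endpoint. A rough stand-alone equivalent in Go; note this sketch skips TLS verification for brevity, whereas the real client trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Illustration only: skip certificate verification instead of loading the cluster CA.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	for _, path := range []string{"/healthz", "/version"} {
		resp, err := client.Get("https://192.168.39.237:8443" + path)
		if err != nil {
			fmt.Println(path, "error:", err)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		fmt.Printf("GET %s -> %d\n%s\n", path, resp.StatusCode, body)
	}
}
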
	I0911 11:30:05.937690 2238380 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:30:06.112179 2238380 request.go:629] Waited for 174.390418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:30:06.112274 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:30:06.112279 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:06.112294 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:06.112301 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:06.116880 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:30:06.116910 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:06.116920 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:06.116930 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:06 GMT
	I0911 11:30:06.116938 2238380 round_trippers.go:580]     Audit-Id: 228f5aae-d3da-47f1-bab7-4c3119c0f0c1
	I0911 11:30:06.116946 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:06.116954 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:06.116961 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:06.118646 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"903"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"893","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81889 chars]
	I0911 11:30:06.121219 2238380 system_pods.go:59] 12 kube-system pods found
	I0911 11:30:06.121242 2238380 system_pods.go:61] "coredns-5dd5756b68-fzpjk" [f72f6ba0-92a3-4108-a37f-e6ad5009c37c] Running
	I0911 11:30:06.121247 2238380 system_pods.go:61] "etcd-multinode-378707" [30882221-42a4-42a4-9911-63a8ff26c903] Running
	I0911 11:30:06.121251 2238380 system_pods.go:61] "kindnet-gxpnd" [e59da67c-e818-45db-bbcd-db99a4310bf1] Running
	I0911 11:30:06.121259 2238380 system_pods.go:61] "kindnet-lrktz" [980de8e0-df33-41d2-847f-3f600dfcc611] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0911 11:30:06.121266 2238380 system_pods.go:61] "kindnet-p8h9v" [81e27af1-dd2f-464f-9daf-e1bdf9f1bdf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0911 11:30:06.121273 2238380 system_pods.go:61] "kube-apiserver-multinode-378707" [6cc96039-3a17-4243-93b6-4bf3ed6f69a8] Running
	I0911 11:30:06.121280 2238380 system_pods.go:61] "kube-controller-manager-multinode-378707" [7bd2ecf1-1558-4680-9075-d30d989a0568] Running
	I0911 11:30:06.121284 2238380 system_pods.go:61] "kube-proxy-8gcxx" [f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7] Running
	I0911 11:30:06.121288 2238380 system_pods.go:61] "kube-proxy-kwvbm" [6a1764e3-ef89-4687-874e-03baf3e90296] Running
	I0911 11:30:06.121294 2238380 system_pods.go:61] "kube-proxy-snbc8" [c3bb9995-3cd6-4433-a326-3da0a7f4aff3] Running
	I0911 11:30:06.121300 2238380 system_pods.go:61] "kube-scheduler-multinode-378707" [51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7] Running
	I0911 11:30:06.121304 2238380 system_pods.go:61] "storage-provisioner" [77e1a93d-fc34-4f05-8320-169bb6c93e46] Running
	I0911 11:30:06.121311 2238380 system_pods.go:74] duration metric: took 183.612188ms to wait for pod list to return data ...
	I0911 11:30:06.121321 2238380 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:30:06.311723 2238380 request.go:629] Waited for 190.313197ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:30:06.311794 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/default/serviceaccounts
	I0911 11:30:06.311799 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:06.311808 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:06.311814 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:06.315043 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:06.315070 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:06.315078 2238380 round_trippers.go:580]     Audit-Id: 087585f9-163a-4c2d-9ac3-1434c3e7bf08
	I0911 11:30:06.315084 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:06.315092 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:06.315097 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:06.315103 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:06.315109 2238380 round_trippers.go:580]     Content-Length: 261
	I0911 11:30:06.315114 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:06 GMT
	I0911 11:30:06.315148 2238380 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"903"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"86e8c023-176f-41fb-9ec0-ea8561fe161a","resourceVersion":"334","creationTimestamp":"2023-09-11T11:19:34Z"}}]}
	I0911 11:30:06.315335 2238380 default_sa.go:45] found service account: "default"
	I0911 11:30:06.315349 2238380 default_sa.go:55] duration metric: took 194.022916ms for default service account to be created ...
	I0911 11:30:06.315357 2238380 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:30:06.511774 2238380 request.go:629] Waited for 196.335346ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:30:06.511867 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:30:06.511873 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:06.511882 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:06.511889 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:06.516223 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:30:06.516252 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:06.516264 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:06.516274 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:06.516282 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:06.516291 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:06.516299 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:06 GMT
	I0911 11:30:06.516307 2238380 round_trippers.go:580]     Audit-Id: 4b762e46-d80e-4ff1-867c-d3277e13c46f
	I0911 11:30:06.518013 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"903"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"893","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f
:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{ [truncated 81889 chars]
	I0911 11:30:06.520478 2238380 system_pods.go:86] 12 kube-system pods found
	I0911 11:30:06.520500 2238380 system_pods.go:89] "coredns-5dd5756b68-fzpjk" [f72f6ba0-92a3-4108-a37f-e6ad5009c37c] Running
	I0911 11:30:06.520505 2238380 system_pods.go:89] "etcd-multinode-378707" [30882221-42a4-42a4-9911-63a8ff26c903] Running
	I0911 11:30:06.520509 2238380 system_pods.go:89] "kindnet-gxpnd" [e59da67c-e818-45db-bbcd-db99a4310bf1] Running
	I0911 11:30:06.520516 2238380 system_pods.go:89] "kindnet-lrktz" [980de8e0-df33-41d2-847f-3f600dfcc611] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0911 11:30:06.520525 2238380 system_pods.go:89] "kindnet-p8h9v" [81e27af1-dd2f-464f-9daf-e1bdf9f1bdf3] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0911 11:30:06.520530 2238380 system_pods.go:89] "kube-apiserver-multinode-378707" [6cc96039-3a17-4243-93b6-4bf3ed6f69a8] Running
	I0911 11:30:06.520538 2238380 system_pods.go:89] "kube-controller-manager-multinode-378707" [7bd2ecf1-1558-4680-9075-d30d989a0568] Running
	I0911 11:30:06.520543 2238380 system_pods.go:89] "kube-proxy-8gcxx" [f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7] Running
	I0911 11:30:06.520546 2238380 system_pods.go:89] "kube-proxy-kwvbm" [6a1764e3-ef89-4687-874e-03baf3e90296] Running
	I0911 11:30:06.520550 2238380 system_pods.go:89] "kube-proxy-snbc8" [c3bb9995-3cd6-4433-a326-3da0a7f4aff3] Running
	I0911 11:30:06.520556 2238380 system_pods.go:89] "kube-scheduler-multinode-378707" [51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7] Running
	I0911 11:30:06.520560 2238380 system_pods.go:89] "storage-provisioner" [77e1a93d-fc34-4f05-8320-169bb6c93e46] Running
	I0911 11:30:06.520567 2238380 system_pods.go:126] duration metric: took 205.204454ms to wait for k8s-apps to be running ...
	I0911 11:30:06.520576 2238380 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:30:06.520621 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:30:06.534828 2238380 system_svc.go:56] duration metric: took 14.239805ms WaitForService to wait for kubelet.
	I0911 11:30:06.534857 2238380 kubeadm.go:581] duration metric: took 14.229700027s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:30:06.534880 2238380 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:30:06.712386 2238380 request.go:629] Waited for 177.376729ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0911 11:30:06.712564 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0911 11:30:06.712583 2238380 round_trippers.go:469] Request Headers:
	I0911 11:30:06.712627 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:30:06.712647 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:30:06.716088 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:30:06.716115 2238380 round_trippers.go:577] Response Headers:
	I0911 11:30:06.716126 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:30:06 GMT
	I0911 11:30:06.716135 2238380 round_trippers.go:580]     Audit-Id: 6c023eb8-0bab-4977-9745-f8a13bf9b8c8
	I0911 11:30:06.716143 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:30:06.716151 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:30:06.716164 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:30:06.716174 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:30:06.716871 2238380 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"903"},"items":[{"metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"868","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields
":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":" [truncated 15076 chars]
	I0911 11:30:06.717524 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:30:06.717639 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:30:06.717674 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:30:06.717687 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:30:06.717693 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:30:06.717702 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:30:06.717711 2238380 node_conditions.go:105] duration metric: took 182.826129ms to run NodePressure ...
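
The NodePressure step above reads each node's reported capacity (17784752Ki of ephemeral storage and 2 CPUs per node in this run). A small client-go sketch that lists the nodes and prints those two capacity fields; the kubeconfig path is again a placeholder:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
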
	I0911 11:30:06.717730 2238380 start.go:228] waiting for startup goroutines ...
	I0911 11:30:06.717741 2238380 start.go:233] waiting for cluster config update ...
	I0911 11:30:06.717749 2238380 start.go:242] writing updated cluster config ...
	I0911 11:30:06.718257 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:30:06.718377 2238380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:30:06.721695 2238380 out.go:177] * Starting worker node multinode-378707-m02 in cluster multinode-378707
	I0911 11:30:06.723070 2238380 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:30:06.723105 2238380 cache.go:57] Caching tarball of preloaded images
	I0911 11:30:06.723239 2238380 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:30:06.723251 2238380 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:30:06.723395 2238380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:30:06.723585 2238380 start.go:365] acquiring machines lock for multinode-378707-m02: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:30:06.723630 2238380 start.go:369] acquired machines lock for "multinode-378707-m02" in 24.879µs
	I0911 11:30:06.723649 2238380 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:30:06.723657 2238380 fix.go:54] fixHost starting: m02
	I0911 11:30:06.723962 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:30:06.723986 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:30:06.739229 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34195
	I0911 11:30:06.739676 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:30:06.740186 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:30:06.740202 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:30:06.740494 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:30:06.740711 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:30:06.740892 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetState
	I0911 11:30:06.742641 2238380 fix.go:102] recreateIfNeeded on multinode-378707-m02: state=Running err=<nil>
	W0911 11:30:06.742665 2238380 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:30:06.744828 2238380 out.go:177] * Updating the running kvm2 "multinode-378707-m02" VM ...
	I0911 11:30:06.746412 2238380 machine.go:88] provisioning docker machine ...
	I0911 11:30:06.746440 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:30:06.746675 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetMachineName
	I0911 11:30:06.746942 2238380 buildroot.go:166] provisioning hostname "multinode-378707-m02"
	I0911 11:30:06.746967 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetMachineName
	I0911 11:30:06.747172 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:30:06.749620 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:06.750111 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:30:06.750142 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:06.750313 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:30:06.750486 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:30:06.750628 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:30:06.750733 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:30:06.750857 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:30:06.751299 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:30:06.751316 2238380 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-378707-m02 && echo "multinode-378707-m02" | sudo tee /etc/hostname
	I0911 11:30:06.894476 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-378707-m02
	
	I0911 11:30:06.894513 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:30:06.897650 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:06.898063 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:30:06.898099 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:06.898254 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:30:06.898452 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:30:06.898689 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:30:06.898883 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:30:06.899047 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:30:06.899494 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:30:06.899515 2238380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-378707-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-378707-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-378707-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:30:07.026266 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:30:07.026300 2238380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:30:07.026319 2238380 buildroot.go:174] setting up certificates
	I0911 11:30:07.026333 2238380 provision.go:83] configureAuth start
	I0911 11:30:07.026342 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetMachineName
	I0911 11:30:07.026669 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:30:07.029689 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.030155 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:30:07.030193 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.030321 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:30:07.032580 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.032929 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:30:07.032961 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.033071 2238380 provision.go:138] copyHostCerts
	I0911 11:30:07.033106 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:30:07.033139 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:30:07.033148 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:30:07.033216 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:30:07.033293 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:30:07.033309 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:30:07.033316 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:30:07.033339 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:30:07.033386 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:30:07.033406 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:30:07.033409 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:30:07.033429 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:30:07.033476 2238380 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.multinode-378707-m02 san=[192.168.39.220 192.168.39.220 localhost 127.0.0.1 minikube multinode-378707-m02]
	I0911 11:30:07.154015 2238380 provision.go:172] copyRemoteCerts
	I0911 11:30:07.154092 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:30:07.154131 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:30:07.157221 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.157733 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:30:07.157765 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.158039 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:30:07.158283 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:30:07.158438 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:30:07.158586 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:30:07.253090 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:30:07.253177 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:30:07.279939 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:30:07.280032 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:30:07.304373 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:30:07.304465 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0911 11:30:07.329088 2238380 provision.go:86] duration metric: configureAuth took 302.740636ms
	I0911 11:30:07.329118 2238380 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:30:07.329419 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:30:07.329605 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:30:07.332431 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.332858 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:30:07.332893 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:30:07.333082 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:30:07.333276 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:30:07.333471 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:30:07.333622 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:30:07.333793 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:30:07.334190 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:30:07.334208 2238380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:31:37.946615 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:31:37.946655 2238380 machine.go:91] provisioned docker machine in 1m31.200226899s
	I0911 11:31:37.946690 2238380 start.go:300] post-start starting for "multinode-378707-m02" (driver="kvm2")
	I0911 11:31:37.946707 2238380 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:31:37.946745 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:31:37.947145 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:31:37.947193 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:31:37.950239 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:37.950793 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:31:37.950834 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:37.951001 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:31:37.951208 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:31:37.951382 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:31:37.951525 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:31:38.044067 2238380 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:31:38.048644 2238380 command_runner.go:130] > NAME=Buildroot
	I0911 11:31:38.048674 2238380 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0911 11:31:38.048680 2238380 command_runner.go:130] > ID=buildroot
	I0911 11:31:38.048685 2238380 command_runner.go:130] > VERSION_ID=2021.02.12
	I0911 11:31:38.048689 2238380 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0911 11:31:38.048728 2238380 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:31:38.048746 2238380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:31:38.048862 2238380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:31:38.048958 2238380 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:31:38.048971 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /etc/ssl/certs/22224712.pem
	I0911 11:31:38.049074 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:31:38.057967 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:31:38.082376 2238380 start.go:303] post-start completed in 135.664647ms
	I0911 11:31:38.082410 2238380 fix.go:56] fixHost completed within 1m31.358752643s
	I0911 11:31:38.082441 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:31:38.085503 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.086047 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:31:38.086086 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.086290 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:31:38.086502 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:31:38.086646 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:31:38.086766 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:31:38.087012 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:31:38.087424 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.220 22 <nil> <nil>}
	I0911 11:31:38.087437 2238380 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 11:31:38.213950 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694431898.207772912
	
	I0911 11:31:38.213980 2238380 fix.go:206] guest clock: 1694431898.207772912
	I0911 11:31:38.213988 2238380 fix.go:219] Guest: 2023-09-11 11:31:38.207772912 +0000 UTC Remote: 2023-09-11 11:31:38.082415393 +0000 UTC m=+455.330975153 (delta=125.357519ms)
	I0911 11:31:38.214004 2238380 fix.go:190] guest clock delta is within tolerance: 125.357519ms
	I0911 11:31:38.214010 2238380 start.go:83] releasing machines lock for "multinode-378707-m02", held for 1m31.490366236s
	I0911 11:31:38.214041 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:31:38.214323 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:31:38.217241 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.217671 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:31:38.217702 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.219764 2238380 out.go:177] * Found network options:
	I0911 11:31:38.221281 2238380 out.go:177]   - NO_PROXY=192.168.39.237
	W0911 11:31:38.222553 2238380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0911 11:31:38.222646 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:31:38.223233 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:31:38.223441 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:31:38.223538 2238380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:31:38.223595 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	W0911 11:31:38.223655 2238380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0911 11:31:38.223747 2238380 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:31:38.223770 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:31:38.226387 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.226675 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.226715 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:31:38.226748 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.226854 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:31:38.227047 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:31:38.227132 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:31:38.227168 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:38.227217 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:31:38.227383 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:31:38.227412 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:31:38.227544 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:31:38.227683 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:31:38.227813 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:31:38.478807 2238380 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 11:31:38.478848 2238380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:31:38.485315 2238380 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0911 11:31:38.485383 2238380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:31:38.485455 2238380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:31:38.494037 2238380 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 11:31:38.494061 2238380 start.go:466] detecting cgroup driver to use...
	I0911 11:31:38.494159 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:31:38.507949 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:31:38.520527 2238380 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:31:38.520592 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:31:38.533499 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:31:38.546060 2238380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:31:38.678760 2238380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:31:38.808293 2238380 docker.go:212] disabling docker service ...
	I0911 11:31:38.808388 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:31:38.824398 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:31:38.837237 2238380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:31:38.957438 2238380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:31:39.088846 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:31:39.102217 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:31:39.120340 2238380 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0911 11:31:39.120559 2238380 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:31:39.120624 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:31:39.131858 2238380 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:31:39.131928 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:31:39.142746 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:31:39.153187 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:31:39.163381 2238380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:31:39.174328 2238380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:31:39.183062 2238380 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0911 11:31:39.183151 2238380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:31:39.191777 2238380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:31:39.306659 2238380 ssh_runner.go:195] Run: sudo systemctl restart crio
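
The CRI-O reconfiguration above is done with in-place sed edits over /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroup_manager, conmon_cgroup), followed by a daemon-reload and a crio restart. A hedged Go sketch of that line-replacement idea; setCrioOption is a hypothetical helper written for illustration, not a minikube function:

package main

import (
	"fmt"
	"os"
	"regexp"
)

// setCrioOption rewrites any line assigning `key` so it reads key = "value",
// mirroring the sed expressions shown in the log above.
// Hypothetical helper for illustration, not minikube code.
func setCrioOption(path, key, value string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	out := re.ReplaceAll(data, []byte(fmt.Sprintf("%s = %q", key, value)))
	return os.WriteFile(path, out, 0644)
}

func main() {
	const conf = "/etc/crio/crio.conf.d/02-crio.conf"
	if err := setCrioOption(conf, "pause_image", "registry.k8s.io/pause:3.9"); err != nil {
		panic(err)
	}
	if err := setCrioOption(conf, "cgroup_manager", "cgroupfs"); err != nil {
		panic(err)
	}
	// After editing, the node still runs: systemctl daemon-reload && systemctl restart crio
}
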
	I0911 11:31:39.544986 2238380 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:31:39.545083 2238380 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:31:39.550361 2238380 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0911 11:31:39.550394 2238380 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0911 11:31:39.550404 2238380 command_runner.go:130] > Device: 16h/22d	Inode: 1218        Links: 1
	I0911 11:31:39.550415 2238380 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:31:39.550423 2238380 command_runner.go:130] > Access: 2023-09-11 11:31:39.465026571 +0000
	I0911 11:31:39.550435 2238380 command_runner.go:130] > Modify: 2023-09-11 11:31:39.465026571 +0000
	I0911 11:31:39.550442 2238380 command_runner.go:130] > Change: 2023-09-11 11:31:39.465026571 +0000
	I0911 11:31:39.550449 2238380 command_runner.go:130] >  Birth: -
	I0911 11:31:39.550684 2238380 start.go:534] Will wait 60s for crictl version
	I0911 11:31:39.550740 2238380 ssh_runner.go:195] Run: which crictl
	I0911 11:31:39.555247 2238380 command_runner.go:130] > /usr/bin/crictl
	I0911 11:31:39.555327 2238380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:31:39.595944 2238380 command_runner.go:130] > Version:  0.1.0
	I0911 11:31:39.595980 2238380 command_runner.go:130] > RuntimeName:  cri-o
	I0911 11:31:39.595987 2238380 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0911 11:31:39.595995 2238380 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0911 11:31:39.597116 2238380 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
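	The socket stat and the crictl/crio version probes above are how start-up decides the runtime is ready before proceeding. The same check can be reproduced by hand with the commands seen in the log (a sketch, not part of the test itself):

	    # Confirm the CRI-O socket exists and the runtime answers over CRI
	    stat /var/run/crio/crio.sock     # should be a root-owned socket
	    sudo crictl version             # RuntimeName: cri-o, RuntimeApiVersion: v1alpha2
	    crio --version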
	I0911 11:31:39.597209 2238380 ssh_runner.go:195] Run: crio --version
	I0911 11:31:39.648357 2238380 command_runner.go:130] > crio version 1.24.1
	I0911 11:31:39.648387 2238380 command_runner.go:130] > Version:          1.24.1
	I0911 11:31:39.648394 2238380 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:31:39.648399 2238380 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:31:39.648405 2238380 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:31:39.648409 2238380 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:31:39.648413 2238380 command_runner.go:130] > Compiler:         gc
	I0911 11:31:39.648417 2238380 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:31:39.648424 2238380 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:31:39.648431 2238380 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:31:39.648436 2238380 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:31:39.648440 2238380 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:31:39.650194 2238380 ssh_runner.go:195] Run: crio --version
	I0911 11:31:39.700530 2238380 command_runner.go:130] > crio version 1.24.1
	I0911 11:31:39.700563 2238380 command_runner.go:130] > Version:          1.24.1
	I0911 11:31:39.700573 2238380 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:31:39.700598 2238380 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:31:39.700607 2238380 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:31:39.700626 2238380 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:31:39.700633 2238380 command_runner.go:130] > Compiler:         gc
	I0911 11:31:39.700640 2238380 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:31:39.700652 2238380 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:31:39.700665 2238380 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:31:39.700679 2238380 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:31:39.700683 2238380 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:31:39.704240 2238380 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 11:31:39.705826 2238380 out.go:177]   - env NO_PROXY=192.168.39.237
	I0911 11:31:39.707333 2238380 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:31:39.710555 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:39.711130 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:31:39.711158 2238380 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:31:39.711458 2238380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 11:31:39.716063 2238380 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0911 11:31:39.716167 2238380 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707 for IP: 192.168.39.220
	I0911 11:31:39.716203 2238380 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:31:39.716399 2238380 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:31:39.716450 2238380 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:31:39.716469 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:31:39.716482 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:31:39.716495 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:31:39.716507 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:31:39.716579 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:31:39.716612 2238380 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:31:39.716620 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:31:39.716642 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:31:39.716669 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:31:39.716691 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:31:39.716730 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:31:39.716755 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem -> /usr/share/ca-certificates/2222471.pem
	I0911 11:31:39.716766 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /usr/share/ca-certificates/22224712.pem
	I0911 11:31:39.716779 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:31:39.717230 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:31:39.742629 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:31:39.767001 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:31:39.790793 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:31:39.814399 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:31:39.838278 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:31:39.862606 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:31:39.887963 2238380 ssh_runner.go:195] Run: openssl version
	I0911 11:31:39.894302 2238380 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0911 11:31:39.894470 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:31:39.907285 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:31:39.912186 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:31:39.912266 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:31:39.912328 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:31:39.918311 2238380 command_runner.go:130] > b5213941
	I0911 11:31:39.918395 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:31:39.928579 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:31:39.941912 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:31:39.946531 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:31:39.946716 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:31:39.946770 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:31:39.952543 2238380 command_runner.go:130] > 51391683
	I0911 11:31:39.952633 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 11:31:39.962409 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:31:39.973395 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:31:39.978508 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:31:39.978561 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:31:39.978630 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:31:39.984352 2238380 command_runner.go:130] > 3ec20f2e
	I0911 11:31:39.984429 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
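	Each CA is made trusted on the node by linking it into /etc/ssl/certs under its OpenSSL subject hash, which is what the openssl x509 -hash / ln -fs pairs above are doing. A minimal sketch of the pattern for one certificate, with file names taken from the log:

	    # Install minikubeCA.pem under its subject-hash name so TLS clients on the node find it
	    CERT=/usr/share/ca-certificates/minikubeCA.pem
	    sudo ln -fs "$CERT" /etc/ssl/certs/minikubeCA.pem
	    HASH=$(openssl x509 -hash -noout -in "$CERT")        # b5213941 for this CA in the log
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"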
	I0911 11:31:39.995346 2238380 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:31:40.000133 2238380 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:31:40.000249 2238380 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:31:40.000419 2238380 ssh_runner.go:195] Run: crio config
	I0911 11:31:40.048897 2238380 command_runner.go:130] ! time="2023-09-11 11:31:40.042703943Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0911 11:31:40.048933 2238380 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0911 11:31:40.062324 2238380 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0911 11:31:40.062351 2238380 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0911 11:31:40.062358 2238380 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0911 11:31:40.062361 2238380 command_runner.go:130] > #
	I0911 11:31:40.062368 2238380 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0911 11:31:40.062374 2238380 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0911 11:31:40.062379 2238380 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0911 11:31:40.062386 2238380 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0911 11:31:40.062390 2238380 command_runner.go:130] > # reload'.
	I0911 11:31:40.062396 2238380 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0911 11:31:40.062402 2238380 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0911 11:31:40.062407 2238380 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0911 11:31:40.062413 2238380 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0911 11:31:40.062420 2238380 command_runner.go:130] > [crio]
	I0911 11:31:40.062430 2238380 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0911 11:31:40.062441 2238380 command_runner.go:130] > # containers images, in this directory.
	I0911 11:31:40.062451 2238380 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0911 11:31:40.062466 2238380 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0911 11:31:40.062478 2238380 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0911 11:31:40.062487 2238380 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0911 11:31:40.062499 2238380 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0911 11:31:40.062510 2238380 command_runner.go:130] > storage_driver = "overlay"
	I0911 11:31:40.062521 2238380 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0911 11:31:40.062535 2238380 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0911 11:31:40.062546 2238380 command_runner.go:130] > storage_option = [
	I0911 11:31:40.062555 2238380 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0911 11:31:40.062564 2238380 command_runner.go:130] > ]
	I0911 11:31:40.062576 2238380 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0911 11:31:40.062590 2238380 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0911 11:31:40.062601 2238380 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0911 11:31:40.062612 2238380 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0911 11:31:40.062626 2238380 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0911 11:31:40.062637 2238380 command_runner.go:130] > # always happen on a node reboot
	I0911 11:31:40.062650 2238380 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0911 11:31:40.062662 2238380 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0911 11:31:40.062673 2238380 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0911 11:31:40.062691 2238380 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0911 11:31:40.062703 2238380 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0911 11:31:40.062720 2238380 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0911 11:31:40.062737 2238380 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0911 11:31:40.062747 2238380 command_runner.go:130] > # internal_wipe = true
	I0911 11:31:40.062754 2238380 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0911 11:31:40.062764 2238380 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0911 11:31:40.062773 2238380 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0911 11:31:40.062786 2238380 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0911 11:31:40.062796 2238380 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0911 11:31:40.062806 2238380 command_runner.go:130] > [crio.api]
	I0911 11:31:40.062816 2238380 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0911 11:31:40.062827 2238380 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0911 11:31:40.062844 2238380 command_runner.go:130] > # IP address on which the stream server will listen.
	I0911 11:31:40.062854 2238380 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0911 11:31:40.062867 2238380 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0911 11:31:40.062879 2238380 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0911 11:31:40.062889 2238380 command_runner.go:130] > # stream_port = "0"
	I0911 11:31:40.062903 2238380 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0911 11:31:40.062914 2238380 command_runner.go:130] > # stream_enable_tls = false
	I0911 11:31:40.062926 2238380 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0911 11:31:40.062936 2238380 command_runner.go:130] > # stream_idle_timeout = ""
	I0911 11:31:40.062951 2238380 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0911 11:31:40.062965 2238380 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0911 11:31:40.062974 2238380 command_runner.go:130] > # minutes.
	I0911 11:31:40.062983 2238380 command_runner.go:130] > # stream_tls_cert = ""
	I0911 11:31:40.062997 2238380 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0911 11:31:40.063011 2238380 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0911 11:31:40.063022 2238380 command_runner.go:130] > # stream_tls_key = ""
	I0911 11:31:40.063037 2238380 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0911 11:31:40.063051 2238380 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0911 11:31:40.063063 2238380 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0911 11:31:40.063074 2238380 command_runner.go:130] > # stream_tls_ca = ""
	I0911 11:31:40.063090 2238380 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:31:40.063102 2238380 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0911 11:31:40.063117 2238380 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:31:40.063128 2238380 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0911 11:31:40.063156 2238380 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0911 11:31:40.063172 2238380 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0911 11:31:40.063178 2238380 command_runner.go:130] > [crio.runtime]
	I0911 11:31:40.063192 2238380 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0911 11:31:40.063205 2238380 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0911 11:31:40.063216 2238380 command_runner.go:130] > # "nofile=1024:2048"
	I0911 11:31:40.063229 2238380 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0911 11:31:40.063242 2238380 command_runner.go:130] > # default_ulimits = [
	I0911 11:31:40.063251 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.063263 2238380 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0911 11:31:40.063273 2238380 command_runner.go:130] > # no_pivot = false
	I0911 11:31:40.063284 2238380 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0911 11:31:40.063301 2238380 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0911 11:31:40.063312 2238380 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0911 11:31:40.063323 2238380 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0911 11:31:40.063335 2238380 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0911 11:31:40.063350 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:31:40.063361 2238380 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0911 11:31:40.063372 2238380 command_runner.go:130] > # Cgroup setting for conmon
	I0911 11:31:40.063385 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0911 11:31:40.063394 2238380 command_runner.go:130] > conmon_cgroup = "pod"
	I0911 11:31:40.063406 2238380 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0911 11:31:40.063433 2238380 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0911 11:31:40.063449 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:31:40.063459 2238380 command_runner.go:130] > conmon_env = [
	I0911 11:31:40.063473 2238380 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0911 11:31:40.063481 2238380 command_runner.go:130] > ]
	I0911 11:31:40.063491 2238380 command_runner.go:130] > # Additional environment variables to set for all the
	I0911 11:31:40.063504 2238380 command_runner.go:130] > # containers. These are overridden if set in the
	I0911 11:31:40.063517 2238380 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0911 11:31:40.063528 2238380 command_runner.go:130] > # default_env = [
	I0911 11:31:40.063538 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.063552 2238380 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0911 11:31:40.063562 2238380 command_runner.go:130] > # selinux = false
	I0911 11:31:40.063576 2238380 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0911 11:31:40.063591 2238380 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0911 11:31:40.063604 2238380 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0911 11:31:40.063617 2238380 command_runner.go:130] > # seccomp_profile = ""
	I0911 11:31:40.063630 2238380 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0911 11:31:40.063644 2238380 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0911 11:31:40.063658 2238380 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0911 11:31:40.063669 2238380 command_runner.go:130] > # which might increase security.
	I0911 11:31:40.063678 2238380 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0911 11:31:40.063692 2238380 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0911 11:31:40.063706 2238380 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0911 11:31:40.063720 2238380 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0911 11:31:40.063735 2238380 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0911 11:31:40.063747 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:31:40.063762 2238380 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0911 11:31:40.063776 2238380 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0911 11:31:40.063787 2238380 command_runner.go:130] > # the cgroup blockio controller.
	I0911 11:31:40.063795 2238380 command_runner.go:130] > # blockio_config_file = ""
	I0911 11:31:40.063810 2238380 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0911 11:31:40.063823 2238380 command_runner.go:130] > # irqbalance daemon.
	I0911 11:31:40.063836 2238380 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0911 11:31:40.063850 2238380 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0911 11:31:40.063862 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:31:40.063870 2238380 command_runner.go:130] > # rdt_config_file = ""
	I0911 11:31:40.063883 2238380 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0911 11:31:40.063894 2238380 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0911 11:31:40.063908 2238380 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0911 11:31:40.063918 2238380 command_runner.go:130] > # separate_pull_cgroup = ""
	I0911 11:31:40.063933 2238380 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0911 11:31:40.063947 2238380 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0911 11:31:40.063957 2238380 command_runner.go:130] > # will be added.
	I0911 11:31:40.063966 2238380 command_runner.go:130] > # default_capabilities = [
	I0911 11:31:40.063976 2238380 command_runner.go:130] > # 	"CHOWN",
	I0911 11:31:40.063986 2238380 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0911 11:31:40.063994 2238380 command_runner.go:130] > # 	"FSETID",
	I0911 11:31:40.064003 2238380 command_runner.go:130] > # 	"FOWNER",
	I0911 11:31:40.064011 2238380 command_runner.go:130] > # 	"SETGID",
	I0911 11:31:40.064020 2238380 command_runner.go:130] > # 	"SETUID",
	I0911 11:31:40.064027 2238380 command_runner.go:130] > # 	"SETPCAP",
	I0911 11:31:40.064034 2238380 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0911 11:31:40.064042 2238380 command_runner.go:130] > # 	"KILL",
	I0911 11:31:40.064048 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.064063 2238380 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0911 11:31:40.064076 2238380 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:31:40.064086 2238380 command_runner.go:130] > # default_sysctls = [
	I0911 11:31:40.064095 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.064104 2238380 command_runner.go:130] > # List of devices on the host that a
	I0911 11:31:40.064118 2238380 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0911 11:31:40.064128 2238380 command_runner.go:130] > # allowed_devices = [
	I0911 11:31:40.064136 2238380 command_runner.go:130] > # 	"/dev/fuse",
	I0911 11:31:40.064146 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.064156 2238380 command_runner.go:130] > # List of additional devices, specified as
	I0911 11:31:40.064172 2238380 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0911 11:31:40.064185 2238380 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0911 11:31:40.064216 2238380 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:31:40.064227 2238380 command_runner.go:130] > # additional_devices = [
	I0911 11:31:40.064241 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.064253 2238380 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0911 11:31:40.064264 2238380 command_runner.go:130] > # cdi_spec_dirs = [
	I0911 11:31:40.064273 2238380 command_runner.go:130] > # 	"/etc/cdi",
	I0911 11:31:40.064281 2238380 command_runner.go:130] > # 	"/var/run/cdi",
	I0911 11:31:40.064287 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.064302 2238380 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0911 11:31:40.064316 2238380 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0911 11:31:40.064326 2238380 command_runner.go:130] > # Defaults to false.
	I0911 11:31:40.064338 2238380 command_runner.go:130] > # device_ownership_from_security_context = false
	I0911 11:31:40.064352 2238380 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0911 11:31:40.064367 2238380 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0911 11:31:40.064374 2238380 command_runner.go:130] > # hooks_dir = [
	I0911 11:31:40.064382 2238380 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0911 11:31:40.064388 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.064398 2238380 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0911 11:31:40.064410 2238380 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0911 11:31:40.064428 2238380 command_runner.go:130] > # its default mounts from the following two files:
	I0911 11:31:40.064431 2238380 command_runner.go:130] > #
	I0911 11:31:40.064437 2238380 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0911 11:31:40.064447 2238380 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0911 11:31:40.064454 2238380 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0911 11:31:40.064458 2238380 command_runner.go:130] > #
	I0911 11:31:40.064469 2238380 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0911 11:31:40.064482 2238380 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0911 11:31:40.064497 2238380 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0911 11:31:40.064509 2238380 command_runner.go:130] > #      only add mounts it finds in this file.
	I0911 11:31:40.064517 2238380 command_runner.go:130] > #
	I0911 11:31:40.064528 2238380 command_runner.go:130] > # default_mounts_file = ""
	I0911 11:31:40.064540 2238380 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0911 11:31:40.064553 2238380 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0911 11:31:40.064560 2238380 command_runner.go:130] > pids_limit = 1024
	I0911 11:31:40.064566 2238380 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0911 11:31:40.064579 2238380 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0911 11:31:40.064593 2238380 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0911 11:31:40.064609 2238380 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0911 11:31:40.064620 2238380 command_runner.go:130] > # log_size_max = -1
	I0911 11:31:40.064634 2238380 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0911 11:31:40.064645 2238380 command_runner.go:130] > # log_to_journald = false
	I0911 11:31:40.064656 2238380 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0911 11:31:40.064665 2238380 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0911 11:31:40.064677 2238380 command_runner.go:130] > # Path to directory for container attach sockets.
	I0911 11:31:40.064689 2238380 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0911 11:31:40.064702 2238380 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0911 11:31:40.064712 2238380 command_runner.go:130] > # bind_mount_prefix = ""
	I0911 11:31:40.064725 2238380 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0911 11:31:40.064735 2238380 command_runner.go:130] > # read_only = false
	I0911 11:31:40.064747 2238380 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0911 11:31:40.064759 2238380 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0911 11:31:40.064768 2238380 command_runner.go:130] > # live configuration reload.
	I0911 11:31:40.064778 2238380 command_runner.go:130] > # log_level = "info"
	I0911 11:31:40.064788 2238380 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0911 11:31:40.064799 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:31:40.064806 2238380 command_runner.go:130] > # log_filter = ""
	I0911 11:31:40.064832 2238380 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0911 11:31:40.064847 2238380 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0911 11:31:40.064854 2238380 command_runner.go:130] > # separated by comma.
	I0911 11:31:40.064861 2238380 command_runner.go:130] > # uid_mappings = ""
	I0911 11:31:40.064873 2238380 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0911 11:31:40.064885 2238380 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0911 11:31:40.064894 2238380 command_runner.go:130] > # separated by comma.
	I0911 11:31:40.064901 2238380 command_runner.go:130] > # gid_mappings = ""
	I0911 11:31:40.064912 2238380 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0911 11:31:40.064925 2238380 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:31:40.064937 2238380 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:31:40.064948 2238380 command_runner.go:130] > # minimum_mappable_uid = -1
	I0911 11:31:40.064960 2238380 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0911 11:31:40.064973 2238380 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:31:40.064987 2238380 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:31:40.064996 2238380 command_runner.go:130] > # minimum_mappable_gid = -1
	I0911 11:31:40.065005 2238380 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0911 11:31:40.065014 2238380 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0911 11:31:40.065021 2238380 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0911 11:31:40.065030 2238380 command_runner.go:130] > # ctr_stop_timeout = 30
	I0911 11:31:40.065038 2238380 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0911 11:31:40.065050 2238380 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0911 11:31:40.065061 2238380 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0911 11:31:40.065068 2238380 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0911 11:31:40.065083 2238380 command_runner.go:130] > drop_infra_ctr = false
	I0911 11:31:40.065096 2238380 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0911 11:31:40.065109 2238380 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0911 11:31:40.065124 2238380 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0911 11:31:40.065134 2238380 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0911 11:31:40.065147 2238380 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0911 11:31:40.065157 2238380 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0911 11:31:40.065165 2238380 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0911 11:31:40.065179 2238380 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0911 11:31:40.065189 2238380 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0911 11:31:40.065203 2238380 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0911 11:31:40.065216 2238380 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0911 11:31:40.065228 2238380 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0911 11:31:40.065239 2238380 command_runner.go:130] > # default_runtime = "runc"
	I0911 11:31:40.065246 2238380 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0911 11:31:40.065253 2238380 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0911 11:31:40.065266 2238380 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0911 11:31:40.065273 2238380 command_runner.go:130] > # creation as a file is not desired either.
	I0911 11:31:40.065282 2238380 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0911 11:31:40.065289 2238380 command_runner.go:130] > # the hostname is being managed dynamically.
	I0911 11:31:40.065294 2238380 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0911 11:31:40.065298 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.065304 2238380 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0911 11:31:40.065313 2238380 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0911 11:31:40.065332 2238380 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0911 11:31:40.065341 2238380 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0911 11:31:40.065344 2238380 command_runner.go:130] > #
	I0911 11:31:40.065349 2238380 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0911 11:31:40.065356 2238380 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0911 11:31:40.065360 2238380 command_runner.go:130] > #  runtime_type = "oci"
	I0911 11:31:40.065365 2238380 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0911 11:31:40.065370 2238380 command_runner.go:130] > #  privileged_without_host_devices = false
	I0911 11:31:40.065377 2238380 command_runner.go:130] > #  allowed_annotations = []
	I0911 11:31:40.065380 2238380 command_runner.go:130] > # Where:
	I0911 11:31:40.065386 2238380 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0911 11:31:40.065394 2238380 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0911 11:31:40.065401 2238380 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0911 11:31:40.065409 2238380 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0911 11:31:40.065413 2238380 command_runner.go:130] > #   in $PATH.
	I0911 11:31:40.065420 2238380 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0911 11:31:40.065428 2238380 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0911 11:31:40.065434 2238380 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0911 11:31:40.065438 2238380 command_runner.go:130] > #   state.
	I0911 11:31:40.065444 2238380 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0911 11:31:40.065450 2238380 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0911 11:31:40.065458 2238380 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0911 11:31:40.065464 2238380 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0911 11:31:40.065472 2238380 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0911 11:31:40.065479 2238380 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0911 11:31:40.065485 2238380 command_runner.go:130] > #   The currently recognized values are:
	I0911 11:31:40.065492 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0911 11:31:40.065501 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0911 11:31:40.065507 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0911 11:31:40.065516 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0911 11:31:40.065526 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0911 11:31:40.065532 2238380 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0911 11:31:40.065541 2238380 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0911 11:31:40.065547 2238380 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0911 11:31:40.065552 2238380 command_runner.go:130] > #   should be moved to the container's cgroup
	I0911 11:31:40.065558 2238380 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0911 11:31:40.065563 2238380 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0911 11:31:40.065570 2238380 command_runner.go:130] > runtime_type = "oci"
	I0911 11:31:40.065574 2238380 command_runner.go:130] > runtime_root = "/run/runc"
	I0911 11:31:40.065578 2238380 command_runner.go:130] > runtime_config_path = ""
	I0911 11:31:40.065585 2238380 command_runner.go:130] > monitor_path = ""
	I0911 11:31:40.065588 2238380 command_runner.go:130] > monitor_cgroup = ""
	I0911 11:31:40.065592 2238380 command_runner.go:130] > monitor_exec_cgroup = ""
	I0911 11:31:40.065598 2238380 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0911 11:31:40.065604 2238380 command_runner.go:130] > # running containers
	I0911 11:31:40.065608 2238380 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0911 11:31:40.065615 2238380 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0911 11:31:40.065645 2238380 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0911 11:31:40.065654 2238380 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0911 11:31:40.065658 2238380 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0911 11:31:40.065663 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0911 11:31:40.065670 2238380 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0911 11:31:40.065675 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0911 11:31:40.065682 2238380 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0911 11:31:40.065686 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0911 11:31:40.065695 2238380 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0911 11:31:40.065700 2238380 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0911 11:31:40.065709 2238380 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0911 11:31:40.065716 2238380 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0911 11:31:40.065725 2238380 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0911 11:31:40.065733 2238380 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0911 11:31:40.065742 2238380 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0911 11:31:40.065752 2238380 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0911 11:31:40.065758 2238380 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0911 11:31:40.065767 2238380 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0911 11:31:40.065771 2238380 command_runner.go:130] > # Example:
	I0911 11:31:40.065775 2238380 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0911 11:31:40.065783 2238380 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0911 11:31:40.065788 2238380 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0911 11:31:40.065795 2238380 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0911 11:31:40.065799 2238380 command_runner.go:130] > # cpuset = 0
	I0911 11:31:40.065804 2238380 command_runner.go:130] > # cpushares = "0-1"
	I0911 11:31:40.065809 2238380 command_runner.go:130] > # Where:
	I0911 11:31:40.065816 2238380 command_runner.go:130] > # The workload name is workload-type.
	I0911 11:31:40.065822 2238380 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0911 11:31:40.065829 2238380 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0911 11:31:40.065835 2238380 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0911 11:31:40.065842 2238380 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0911 11:31:40.065850 2238380 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0911 11:31:40.065854 2238380 command_runner.go:130] > # 
	I0911 11:31:40.065860 2238380 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0911 11:31:40.065866 2238380 command_runner.go:130] > #
	I0911 11:31:40.065871 2238380 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0911 11:31:40.065879 2238380 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0911 11:31:40.065885 2238380 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0911 11:31:40.065894 2238380 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0911 11:31:40.065899 2238380 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0911 11:31:40.065905 2238380 command_runner.go:130] > [crio.image]
	I0911 11:31:40.065910 2238380 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0911 11:31:40.065917 2238380 command_runner.go:130] > # default_transport = "docker://"
	I0911 11:31:40.065923 2238380 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0911 11:31:40.065929 2238380 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:31:40.065933 2238380 command_runner.go:130] > # global_auth_file = ""
	I0911 11:31:40.065941 2238380 command_runner.go:130] > # The image used to instantiate infra containers.
	I0911 11:31:40.065946 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:31:40.065950 2238380 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0911 11:31:40.065959 2238380 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0911 11:31:40.065964 2238380 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:31:40.065971 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:31:40.065976 2238380 command_runner.go:130] > # pause_image_auth_file = ""
	I0911 11:31:40.065984 2238380 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0911 11:31:40.065992 2238380 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0911 11:31:40.065998 2238380 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0911 11:31:40.066006 2238380 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0911 11:31:40.066011 2238380 command_runner.go:130] > # pause_command = "/pause"
	I0911 11:31:40.066017 2238380 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0911 11:31:40.066025 2238380 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0911 11:31:40.066034 2238380 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0911 11:31:40.066044 2238380 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0911 11:31:40.066054 2238380 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0911 11:31:40.066060 2238380 command_runner.go:130] > # signature_policy = ""
	I0911 11:31:40.066066 2238380 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0911 11:31:40.066074 2238380 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0911 11:31:40.066080 2238380 command_runner.go:130] > # changing them here.
	I0911 11:31:40.066085 2238380 command_runner.go:130] > # insecure_registries = [
	I0911 11:31:40.066090 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.066097 2238380 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0911 11:31:40.066105 2238380 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0911 11:31:40.066111 2238380 command_runner.go:130] > # image_volumes = "mkdir"
	I0911 11:31:40.066117 2238380 command_runner.go:130] > # Temporary directory to use for storing big files
	I0911 11:31:40.066123 2238380 command_runner.go:130] > # big_files_temporary_dir = ""
	I0911 11:31:40.066129 2238380 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0911 11:31:40.066134 2238380 command_runner.go:130] > # CNI plugins.
	I0911 11:31:40.066138 2238380 command_runner.go:130] > [crio.network]
	I0911 11:31:40.066146 2238380 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0911 11:31:40.066154 2238380 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0911 11:31:40.066158 2238380 command_runner.go:130] > # cni_default_network = ""
	I0911 11:31:40.066167 2238380 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0911 11:31:40.066174 2238380 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0911 11:31:40.066179 2238380 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0911 11:31:40.066185 2238380 command_runner.go:130] > # plugin_dirs = [
	I0911 11:31:40.066190 2238380 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0911 11:31:40.066195 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.066201 2238380 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0911 11:31:40.066207 2238380 command_runner.go:130] > [crio.metrics]
	I0911 11:31:40.066213 2238380 command_runner.go:130] > # Globally enable or disable metrics support.
	I0911 11:31:40.066219 2238380 command_runner.go:130] > enable_metrics = true
	I0911 11:31:40.066224 2238380 command_runner.go:130] > # Specify enabled metrics collectors.
	I0911 11:31:40.066230 2238380 command_runner.go:130] > # Per default all metrics are enabled.
	I0911 11:31:40.066241 2238380 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0911 11:31:40.066249 2238380 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0911 11:31:40.066257 2238380 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0911 11:31:40.066262 2238380 command_runner.go:130] > # metrics_collectors = [
	I0911 11:31:40.066266 2238380 command_runner.go:130] > # 	"operations",
	I0911 11:31:40.066274 2238380 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0911 11:31:40.066281 2238380 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0911 11:31:40.066285 2238380 command_runner.go:130] > # 	"operations_errors",
	I0911 11:31:40.066291 2238380 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0911 11:31:40.066295 2238380 command_runner.go:130] > # 	"image_pulls_by_name",
	I0911 11:31:40.066301 2238380 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0911 11:31:40.066305 2238380 command_runner.go:130] > # 	"image_pulls_failures",
	I0911 11:31:40.066310 2238380 command_runner.go:130] > # 	"image_pulls_successes",
	I0911 11:31:40.066314 2238380 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0911 11:31:40.066321 2238380 command_runner.go:130] > # 	"image_layer_reuse",
	I0911 11:31:40.066324 2238380 command_runner.go:130] > # 	"containers_oom_total",
	I0911 11:31:40.066328 2238380 command_runner.go:130] > # 	"containers_oom",
	I0911 11:31:40.066333 2238380 command_runner.go:130] > # 	"processes_defunct",
	I0911 11:31:40.066337 2238380 command_runner.go:130] > # 	"operations_total",
	I0911 11:31:40.066344 2238380 command_runner.go:130] > # 	"operations_latency_seconds",
	I0911 11:31:40.066348 2238380 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0911 11:31:40.066354 2238380 command_runner.go:130] > # 	"operations_errors_total",
	I0911 11:31:40.066359 2238380 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0911 11:31:40.066366 2238380 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0911 11:31:40.066370 2238380 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0911 11:31:40.066377 2238380 command_runner.go:130] > # 	"image_pulls_success_total",
	I0911 11:31:40.066381 2238380 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0911 11:31:40.066387 2238380 command_runner.go:130] > # 	"containers_oom_count_total",
	I0911 11:31:40.066391 2238380 command_runner.go:130] > # ]
	I0911 11:31:40.066396 2238380 command_runner.go:130] > # The port on which the metrics server will listen.
	I0911 11:31:40.066402 2238380 command_runner.go:130] > # metrics_port = 9090
	I0911 11:31:40.066407 2238380 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0911 11:31:40.066411 2238380 command_runner.go:130] > # metrics_socket = ""
	I0911 11:31:40.066416 2238380 command_runner.go:130] > # The certificate for the secure metrics server.
	I0911 11:31:40.066423 2238380 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0911 11:31:40.066429 2238380 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0911 11:31:40.066437 2238380 command_runner.go:130] > # certificate on any modification event.
	I0911 11:31:40.066441 2238380 command_runner.go:130] > # metrics_cert = ""
	I0911 11:31:40.066448 2238380 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0911 11:31:40.066453 2238380 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0911 11:31:40.066459 2238380 command_runner.go:130] > # metrics_key = ""
	I0911 11:31:40.066466 2238380 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0911 11:31:40.066472 2238380 command_runner.go:130] > [crio.tracing]
	I0911 11:31:40.066477 2238380 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0911 11:31:40.066483 2238380 command_runner.go:130] > # enable_tracing = false
	I0911 11:31:40.066489 2238380 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0911 11:31:40.066496 2238380 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0911 11:31:40.066501 2238380 command_runner.go:130] > # Number of samples to collect per million spans.
	I0911 11:31:40.066506 2238380 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0911 11:31:40.066512 2238380 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0911 11:31:40.066516 2238380 command_runner.go:130] > [crio.stats]
	I0911 11:31:40.066521 2238380 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0911 11:31:40.066529 2238380 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0911 11:31:40.066533 2238380 command_runner.go:130] > # stats_collection_period = 0
	I0911 11:31:40.066596 2238380 cni.go:84] Creating CNI manager for ""
	I0911 11:31:40.066605 2238380 cni.go:136] 3 nodes found, recommending kindnet
	I0911 11:31:40.066615 2238380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:31:40.066634 2238380 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.220 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-378707 NodeName:multinode-378707-m02 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.220 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:31:40.066830 2238380 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.220
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-378707-m02"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.220
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:31:40.066888 2238380 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-378707-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.220
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
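The drop-in above is what gets copied to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf a few lines further down. As a rough illustration only (not minikube's implementation; the field names KubernetesVersion, CRISocket, NodeName and NodeIP are made up for this sketch), a Go text/template rendering of an equivalent drop-in from the node parameters seen in this log could look like:

package main

import (
	"os"
	"text/template"
)

// Hypothetical template mirroring the ExecStart line logged above.
const dropin = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropin))
	// Values copied from the log lines above.
	_ = t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.28.1",
		"CRISocket":         "unix:///var/run/crio/crio.sock",
		"NodeName":          "multinode-378707-m02",
		"NodeIP":            "192.168.39.220",
	})
}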
	I0911 11:31:40.066944 2238380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:31:40.077262 2238380 command_runner.go:130] > kubeadm
	I0911 11:31:40.077288 2238380 command_runner.go:130] > kubectl
	I0911 11:31:40.077295 2238380 command_runner.go:130] > kubelet
	I0911 11:31:40.077332 2238380 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:31:40.077394 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0911 11:31:40.086748 2238380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0911 11:31:40.104201 2238380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:31:40.120807 2238380 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0911 11:31:40.124686 2238380 command_runner.go:130] > 192.168.39.237	control-plane.minikube.internal
	I0911 11:31:40.124958 2238380 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:31:40.125210 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:31:40.125368 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:31:40.125401 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:31:40.141049 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40357
	I0911 11:31:40.141576 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:31:40.142244 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:31:40.142264 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:31:40.142644 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:31:40.142918 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:31:40.143089 2238380 start.go:301] JoinCluster: &{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientP
ath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:31:40.143214 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0911 11:31:40.143230 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:31:40.145825 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:31:40.146219 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:31:40.146240 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:31:40.146449 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:31:40.146660 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:31:40.146818 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:31:40.146944 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:31:40.352936 2238380 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token d1extq.frmafvg0n7i28u41 --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
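The join command above is produced by running "kubeadm token create --print-join-command --ttl=0" on the control plane over SSH. A minimal, hypothetical Go sketch of that single step (assuming kubeadm is on PATH on the host where it runs; this is not minikube's code) is:

package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	// Same command as in the log above; assumes this runs on the
	// control-plane host with kubeadm on PATH.
	out, err := exec.Command("kubeadm", "token", "create",
		"--print-join-command", "--ttl=0").Output()
	if err != nil {
		log.Fatalf("kubeadm token create: %v", err)
	}
	joinCmd := strings.TrimSpace(string(out))
	// joinCmd looks like:
	// kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>
	fmt.Println(joinCmd)
}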
	I0911 11:31:40.352994 2238380 start.go:314] removing existing worker node "m02" before attempting to rejoin cluster: &{Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:31:40.353038 2238380 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:31:40.353353 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:31:40.353382 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:31:40.369665 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I0911 11:31:40.370098 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:31:40.370657 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:31:40.370681 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:31:40.371041 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:31:40.371315 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:31:40.371525 2238380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-378707-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0911 11:31:40.371554 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:31:40.375172 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:31:40.375693 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:31:40.375724 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:31:40.375912 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:31:40.376151 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:31:40.376319 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:31:40.376477 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:31:40.603015 2238380 command_runner.go:130] > node/multinode-378707-m02 cordoned
	I0911 11:31:43.647739 2238380 command_runner.go:130] > pod "busybox-5bc68d56bd-f9d7x" has DeletionTimestamp older than 1 seconds, skipping
	I0911 11:31:43.647798 2238380 command_runner.go:130] > node/multinode-378707-m02 drained
	I0911 11:31:43.650119 2238380 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0911 11:31:43.650150 2238380 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-p8h9v, kube-system/kube-proxy-8gcxx
	I0911 11:31:43.650193 2238380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-378707-m02 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.278638359s)
	I0911 11:31:43.650217 2238380 node.go:108] successfully drained node "m02"
	I0911 11:31:43.650788 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:31:43.651034 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:31:43.651548 2238380 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0911 11:31:43.651625 2238380 round_trippers.go:463] DELETE https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:31:43.651632 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:43.651642 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:43.651662 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:43.651670 2238380 round_trippers.go:473]     Content-Type: application/json
	I0911 11:31:43.665798 2238380 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0911 11:31:43.665828 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:43.665840 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:43.665848 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:43.665857 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:43.665866 2238380 round_trippers.go:580]     Content-Length: 171
	I0911 11:31:43.665879 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:43 GMT
	I0911 11:31:43.665890 2238380 round_trippers.go:580]     Audit-Id: f2a0dcbb-28f7-4b42-a905-da1541a7c0f8
	I0911 11:31:43.665896 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:43.665922 2238380 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-378707-m02","kind":"nodes","uid":"53376bc7-b94e-4f8b-bce2-026875c17588"}}
	I0911 11:31:43.665963 2238380 node.go:124] successfully deleted node "m02"
	I0911 11:31:43.665976 2238380 start.go:318] successfully removed existing worker node "m02" from cluster: &{Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
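Before rejoining, the stale "m02" node object is removed with a DELETE against /api/v1/nodes/multinode-378707-m02, as the round-tripper lines above show. A hedged client-go sketch of that call (the kubeconfig path is an assumption for the example; this is not the minikube code path):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path is assumed; adjust to the cluster being targeted.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Equivalent to: DELETE https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	if err := cs.CoreV1().Nodes().Delete(context.Background(),
		"multinode-378707-m02", metav1.DeleteOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("node multinode-378707-m02 deleted")
}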
	I0911 11:31:43.666005 2238380 start.go:322] trying to join worker node "m02" to cluster: &{Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:31:43.666030 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d1extq.frmafvg0n7i28u41 --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-378707-m02"
	I0911 11:31:43.756636 2238380 command_runner.go:130] > [preflight] Running pre-flight checks
	I0911 11:31:43.942178 2238380 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0911 11:31:43.942219 2238380 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0911 11:31:44.006542 2238380 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:31:44.006675 2238380 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:31:44.006701 2238380 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0911 11:31:44.177454 2238380 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0911 11:31:44.705651 2238380 command_runner.go:130] > This node has joined the cluster:
	I0911 11:31:44.705683 2238380 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0911 11:31:44.705693 2238380 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0911 11:31:44.705703 2238380 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0911 11:31:44.708349 2238380 command_runner.go:130] ! W0911 11:31:43.749480    2749 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0911 11:31:44.708383 2238380 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0911 11:31:44.708395 2238380 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0911 11:31:44.708408 2238380 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0911 11:31:44.708431 2238380 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token d1extq.frmafvg0n7i28u41 --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-378707-m02": (1.042383278s)
	I0911 11:31:44.708458 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0911 11:31:45.036089 2238380 start.go:303] JoinCluster complete in 4.892989849s
	I0911 11:31:45.036133 2238380 cni.go:84] Creating CNI manager for ""
	I0911 11:31:45.036154 2238380 cni.go:136] 3 nodes found, recommending kindnet
	I0911 11:31:45.036225 2238380 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:31:45.042801 2238380 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0911 11:31:45.042835 2238380 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0911 11:31:45.042845 2238380 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0911 11:31:45.042855 2238380 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:31:45.042863 2238380 command_runner.go:130] > Access: 2023-09-11 11:29:14.744855626 +0000
	I0911 11:31:45.042871 2238380 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0911 11:31:45.042882 2238380 command_runner.go:130] > Change: 2023-09-11 11:29:12.801855626 +0000
	I0911 11:31:45.042888 2238380 command_runner.go:130] >  Birth: -
	I0911 11:31:45.042953 2238380 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:31:45.042970 2238380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:31:45.061932 2238380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:31:45.418304 2238380 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:31:45.426706 2238380 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:31:45.430297 2238380 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0911 11:31:45.441422 2238380 command_runner.go:130] > daemonset.apps/kindnet configured
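The kindnet CNI manifest is copied to /var/tmp/minikube/cni.yaml and applied with the bundled kubectl, as logged above. A short, illustrative Go sketch of the apply step (binary and file paths copied from this log; not minikube's implementation):

package main

import (
	"log"
	"os/exec"
)

func main() {
	// Mirrors the command in the log; the manifest was already written to
	// /var/tmp/minikube/cni.yaml by the preceding scp step.
	out, err := exec.Command("/var/lib/minikube/binaries/v1.28.1/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml").CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply: %v\n%s", err, out)
	}
	log.Printf("%s", out)
}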
	I0911 11:31:45.444933 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:31:45.445224 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:31:45.445584 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:31:45.445597 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.445608 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.445617 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.454313 2238380 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0911 11:31:45.454341 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.454348 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.454354 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.454359 2238380 round_trippers.go:580]     Content-Length: 291
	I0911 11:31:45.454365 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.454370 2238380 round_trippers.go:580]     Audit-Id: 99925a9c-cc75-499c-afac-c9054266fc17
	I0911 11:31:45.454375 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.454380 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.454404 2238380 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"897","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0911 11:31:45.454519 2238380 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-378707" context rescaled to 1 replicas
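The request above reads the Scale subresource of the coredns deployment and rescales it to 1 replica for the multi-node profile. A hedged client-go equivalent (kubeconfig path is an assumption; this sketch is not the minikube code path):

package main

import (
	"context"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	ctx := context.Background()
	// GET .../namespaces/kube-system/deployments/coredns/scale, as in the log.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		log.Fatal(err)
	}
	scale.Spec.Replicas = 1 // one replica, per the log above
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		log.Fatal(err)
	}
	log.Println("coredns rescaled to 1 replica")
}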
	I0911 11:31:45.454550 2238380 start.go:223] Will wait 6m0s for node &{Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true}
	I0911 11:31:45.456580 2238380 out.go:177] * Verifying Kubernetes components...
	I0911 11:31:45.458116 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:31:45.472658 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:31:45.472989 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:31:45.473255 2238380 node_ready.go:35] waiting up to 6m0s for node "multinode-378707-m02" to be "Ready" ...
	I0911 11:31:45.473332 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:31:45.473341 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.473348 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.473358 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.476049 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.476076 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.476087 2238380 round_trippers.go:580]     Audit-Id: 0b94a6d3-4595-4c47-a2cc-9a8acf5123ab
	I0911 11:31:45.476097 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.476106 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.476114 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.476127 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.476139 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.476282 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"45a0aa36-7b9f-42cb-bb77-1d667e90ffbf","resourceVersion":"1055","creationTimestamp":"2023-09-11T11:31:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:31:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:31:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0911 11:31:45.476607 2238380 node_ready.go:49] node "multinode-378707-m02" has status "Ready":"True"
	I0911 11:31:45.476630 2238380 node_ready.go:38] duration metric: took 3.358992ms waiting for node "multinode-378707-m02" to be "Ready" ...
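The "Ready" wait above is a poll of GET /api/v1/nodes/multinode-378707-m02 that inspects the NodeReady condition. A minimal client-go sketch of that check (illustrative only; the node name and the 6m timeout follow this log, the kubeconfig path is assumed):

package main

import (
	"context"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	// Poll every 2s for up to 6m, like the "waiting up to 6m0s" line above.
	err = wait.PollImmediate(2*time.Second, 6*time.Minute, func() (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(context.Background(),
			"multinode-378707-m02", metav1.GetOptions{})
		if err != nil {
			return false, nil // keep polling on transient errors
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
	if err != nil {
		log.Fatalf("node never became Ready: %v", err)
	}
	log.Println("node is Ready")
}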
	I0911 11:31:45.476654 2238380 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:31:45.476723 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:31:45.476736 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.476748 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.476761 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.480703 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:31:45.480728 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.480736 2238380 round_trippers.go:580]     Audit-Id: aa1e054a-41de-4d15-911d-61a25bab629c
	I0911 11:31:45.480742 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.480747 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.480753 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.480758 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.480763 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.482090 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1060"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"893","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 82093 chars]
	I0911 11:31:45.484551 2238380 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.484628 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:31:45.484636 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.484643 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.484653 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.487011 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.487033 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.487041 2238380 round_trippers.go:580]     Audit-Id: efa0cb9c-7b2b-4b4d-b5cb-c5a99926fcaf
	I0911 11:31:45.487047 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.487052 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.487057 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.487063 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.487068 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.487206 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"893","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0911 11:31:45.487625 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:45.487637 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.487644 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.487650 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.490288 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.490312 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.490321 2238380 round_trippers.go:580]     Audit-Id: 58e5987e-b286-4dc8-8574-4b2531397178
	I0911 11:31:45.490331 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.490340 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.490349 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.490360 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.490365 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.491073 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:31:45.491390 2238380 pod_ready.go:92] pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:45.491403 2238380 pod_ready.go:81] duration metric: took 6.831886ms waiting for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.491412 2238380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.491473 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-378707
	I0911 11:31:45.491481 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.491489 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.491495 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.494943 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:31:45.494963 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.494973 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.494981 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.494990 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.494999 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.495006 2238380 round_trippers.go:580]     Audit-Id: 419af948-ffaa-4ea8-aaf9-bebd5d2a9706
	I0911 11:31:45.495012 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.495173 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-378707","namespace":"kube-system","uid":"30882221-42a4-42a4-9911-63a8ff26c903","resourceVersion":"885","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.237:2379","kubernetes.io/config.hash":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.mirror":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.seen":"2023-09-11T11:19:21.954681050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0911 11:31:45.495549 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:45.495560 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.495568 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.495574 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.497780 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.497817 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.497827 2238380 round_trippers.go:580]     Audit-Id: ea399463-48f1-4b52-a89c-16a591723292
	I0911 11:31:45.497833 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.497838 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.497847 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.497853 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.497861 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.498060 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:31:45.498365 2238380 pod_ready.go:92] pod "etcd-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:45.498379 2238380 pod_ready.go:81] duration metric: took 6.961644ms waiting for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.498396 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.498449 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-378707
	I0911 11:31:45.498456 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.498463 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.498472 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.501095 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.501117 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.501128 2238380 round_trippers.go:580]     Audit-Id: 72aa806f-3226-4c1c-927b-79ee12b8078c
	I0911 11:31:45.501136 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.501145 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.501153 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.501162 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.501173 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.501914 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-378707","namespace":"kube-system","uid":"6cc96039-3a17-4243-93b6-4bf3ed6f69a8","resourceVersion":"861","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.237:8443","kubernetes.io/config.hash":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.mirror":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.seen":"2023-09-11T11:19:21.954683933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0911 11:31:45.502416 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:45.502430 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.502437 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.502444 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.504538 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.504554 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.504560 2238380 round_trippers.go:580]     Audit-Id: 962bc71b-1f6c-499d-9391-55dc47732db7
	I0911 11:31:45.504566 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.504572 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.504580 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.504592 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.504604 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.504731 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:31:45.505097 2238380 pod_ready.go:92] pod "kube-apiserver-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:45.505119 2238380 pod_ready.go:81] duration metric: took 6.713704ms waiting for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.505128 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.505187 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-378707
	I0911 11:31:45.505195 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.505202 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.505209 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.507662 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.507679 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.507689 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.507698 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.507707 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.507723 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.507733 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.507757 2238380 round_trippers.go:580]     Audit-Id: 500ceb20-244a-4c6e-9784-ed001ba37b40
	I0911 11:31:45.508525 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-378707","namespace":"kube-system","uid":"7bd2ecf1-1558-4680-9075-d30d989a0568","resourceVersion":"859","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.mirror":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.seen":"2023-09-11T11:19:21.954684910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0911 11:31:45.508930 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:45.508945 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.508955 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.508965 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.511117 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:45.511133 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.511146 2238380 round_trippers.go:580]     Audit-Id: 51cf62f1-0d39-45b1-bbd0-4c218e2d3012
	I0911 11:31:45.511154 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.511163 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.511172 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.511182 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.511191 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.511336 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:31:45.511627 2238380 pod_ready.go:92] pod "kube-controller-manager-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:45.511641 2238380 pod_ready.go:81] duration metric: took 6.506232ms waiting for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.511654 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.674101 2238380 request.go:629] Waited for 162.372872ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:31:45.674205 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:31:45.674220 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.674249 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.674264 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.678069 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:31:45.678091 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.678099 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.678105 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.678110 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.678116 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.678121 2238380 round_trippers.go:580]     Audit-Id: 006c2e74-f05f-4c99-bacc-2b73b00dd15a
	I0911 11:31:45.678126 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.678312 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gcxx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7","resourceVersion":"1033","creationTimestamp":"2023-09-11T11:20:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0911 11:31:45.874194 2238380 request.go:629] Waited for 195.366244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:31:45.874270 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:31:45.874275 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:45.874290 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:45.874302 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:45.877884 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:31:45.877906 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:45.877913 2238380 round_trippers.go:580]     Audit-Id: be925c80-84ba-4f43-8de2-6c2d8e0f69c8
	I0911 11:31:45.877919 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:45.877924 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:45.877930 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:45.877941 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:45.877949 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:45 GMT
	I0911 11:31:45.878887 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"45a0aa36-7b9f-42cb-bb77-1d667e90ffbf","resourceVersion":"1055","creationTimestamp":"2023-09-11T11:31:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:31:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:31:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0911 11:31:45.879151 2238380 pod_ready.go:92] pod "kube-proxy-8gcxx" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:45.879165 2238380 pod_ready.go:81] duration metric: took 367.506091ms waiting for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:45.879175 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:46.073563 2238380 request.go:629] Waited for 194.298892ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:31:46.073634 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:31:46.073640 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:46.073651 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:46.073661 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:46.076587 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:46.076613 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:46.076623 2238380 round_trippers.go:580]     Audit-Id: 709ea40d-f4ad-46a6-ad7a-3fab2a03e960
	I0911 11:31:46.076632 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:46.076639 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:46.076647 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:46.076654 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:46.076662 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:46 GMT
	I0911 11:31:46.076874 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kwvbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a1764e3-ef89-4687-874e-03baf3e90296","resourceVersion":"711","creationTimestamp":"2023-09-11T11:21:07Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:21:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5526 chars]
	I0911 11:31:46.273745 2238380 request.go:629] Waited for 196.383154ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:31:46.273815 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:31:46.273820 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:46.273828 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:46.273834 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:46.276886 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:31:46.276909 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:46.276916 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:46 GMT
	I0911 11:31:46.276922 2238380 round_trippers.go:580]     Audit-Id: 10e90a42-fc78-4898-8227-265433f9fd40
	I0911 11:31:46.276928 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:46.276933 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:46.276938 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:46.276944 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:46.277756 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m03","uid":"b37f601c-a45d-4f04-b0fa-26387559968e","resourceVersion":"736","creationTimestamp":"2023-09-11T11:21:50Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:21:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:volumes.kubernetes.io/controller-managed-attach-detach":{}},"f:labels":{".":{},"f:beta.kubernetes.io/arch":{},"f:beta.kubernetes.io/os":{},"f:kubernetes.io/arch":{},"f:kubernetes.io/hostname":{},"f:kubernetes.io/os":{}}
}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time": [truncated 3413 chars]
	I0911 11:31:46.278127 2238380 pod_ready.go:92] pod "kube-proxy-kwvbm" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:46.278148 2238380 pod_ready.go:81] duration metric: took 398.96391ms waiting for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:46.278158 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:46.473559 2238380 request.go:629] Waited for 195.314789ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:31:46.473656 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:31:46.473663 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:46.473673 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:46.473689 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:46.478303 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:31:46.478335 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:46.478347 2238380 round_trippers.go:580]     Audit-Id: c86b247e-98be-4dd6-a61f-17dfe96555d2
	I0911 11:31:46.478356 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:46.478364 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:46.478372 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:46.478381 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:46.478389 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:46 GMT
	I0911 11:31:46.478532 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-snbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"c3bb9995-3cd6-4433-a326-3da0a7f4aff3","resourceVersion":"826","creationTimestamp":"2023-09-11T11:19:35Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0911 11:31:46.673448 2238380 request.go:629] Waited for 194.330967ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:46.673535 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:46.673543 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:46.673556 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:46.673590 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:46.675889 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:46.675911 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:46.675918 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:46.675924 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:46.675929 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:46.675935 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:46.675944 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:46 GMT
	I0911 11:31:46.675953 2238380 round_trippers.go:580]     Audit-Id: 686d54bb-0e4c-499b-9199-00e0aabe699b
	I0911 11:31:46.676118 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:31:46.676474 2238380 pod_ready.go:92] pod "kube-proxy-snbc8" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:46.676492 2238380 pod_ready.go:81] duration metric: took 398.328821ms waiting for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:46.676505 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:46.874281 2238380 request.go:629] Waited for 197.696526ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:31:46.874366 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:31:46.874371 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:46.874379 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:46.874386 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:46.877440 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:31:46.877470 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:46.877481 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:46.877490 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:46.877504 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:46.877513 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:46 GMT
	I0911 11:31:46.877524 2238380 round_trippers.go:580]     Audit-Id: 26d2ac67-250c-4d8a-9933-e44dde2e9044
	I0911 11:31:46.877533 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:46.877748 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-378707","namespace":"kube-system","uid":"51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7","resourceVersion":"867","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.mirror":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.seen":"2023-09-11T11:19:21.954685589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0911 11:31:47.073611 2238380 request.go:629] Waited for 195.325585ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:47.073697 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:31:47.073702 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:47.073710 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:47.073717 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:47.076424 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:31:47.076454 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:47.076464 2238380 round_trippers.go:580]     Audit-Id: 6bc4cf80-a7cd-4369-ac17-faca3e755ea4
	I0911 11:31:47.076472 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:47.076479 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:47.076486 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:47.076493 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:47.076501 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:47 GMT
	I0911 11:31:47.077177 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:31:47.077582 2238380 pod_ready.go:92] pod "kube-scheduler-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:31:47.077601 2238380 pod_ready.go:81] duration metric: took 401.088817ms waiting for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:31:47.077614 2238380 pod_ready.go:38] duration metric: took 1.600945238s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:31:47.077629 2238380 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:31:47.077699 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:31:47.091571 2238380 system_svc.go:56] duration metric: took 13.926202ms WaitForService to wait for kubelet.
	I0911 11:31:47.091606 2238380 kubeadm.go:581] duration metric: took 1.637030435s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:31:47.091636 2238380 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:31:47.274087 2238380 request.go:629] Waited for 182.338702ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0911 11:31:47.274176 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0911 11:31:47.274183 2238380 round_trippers.go:469] Request Headers:
	I0911 11:31:47.274197 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:31:47.274210 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:31:47.277590 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:31:47.277619 2238380 round_trippers.go:577] Response Headers:
	I0911 11:31:47.277630 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:31:47.277638 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:31:47 GMT
	I0911 11:31:47.277651 2238380 round_trippers.go:580]     Audit-Id: c701990a-fd71-4abd-be50-39edd0474def
	I0911 11:31:47.277659 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:31:47.277667 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:31:47.277676 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:31:47.277983 2238380 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1069"},"items":[{"metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15106 chars]
	I0911 11:31:47.278606 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:31:47.278627 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:31:47.278640 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:31:47.278645 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:31:47.278649 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:31:47.278652 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:31:47.278656 2238380 node_conditions.go:105] duration metric: took 187.015905ms to run NodePressure ...
	I0911 11:31:47.278680 2238380 start.go:228] waiting for startup goroutines ...
	I0911 11:31:47.278708 2238380 start.go:242] writing updated cluster config ...
	I0911 11:31:47.279299 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:31:47.279442 2238380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:31:47.283034 2238380 out.go:177] * Starting worker node multinode-378707-m03 in cluster multinode-378707
	I0911 11:31:47.284436 2238380 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:31:47.284466 2238380 cache.go:57] Caching tarball of preloaded images
	I0911 11:31:47.284582 2238380 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:31:47.284598 2238380 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:31:47.284760 2238380 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/config.json ...
	I0911 11:31:47.284991 2238380 start.go:365] acquiring machines lock for multinode-378707-m03: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:31:47.285047 2238380 start.go:369] acquired machines lock for "multinode-378707-m03" in 30.693µs
	I0911 11:31:47.285069 2238380 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:31:47.285079 2238380 fix.go:54] fixHost starting: m03
	I0911 11:31:47.285408 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:31:47.285435 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:31:47.300982 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34855
	I0911 11:31:47.301511 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:31:47.302011 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:31:47.302032 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:31:47.302361 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:31:47.302568 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .DriverName
	I0911 11:31:47.302784 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetState
	I0911 11:31:47.304465 2238380 fix.go:102] recreateIfNeeded on multinode-378707-m03: state=Running err=<nil>
	W0911 11:31:47.304486 2238380 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:31:47.306507 2238380 out.go:177] * Updating the running kvm2 "multinode-378707-m03" VM ...
	I0911 11:31:47.307850 2238380 machine.go:88] provisioning docker machine ...
	I0911 11:31:47.307883 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .DriverName
	I0911 11:31:47.308133 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetMachineName
	I0911 11:31:47.308325 2238380 buildroot.go:166] provisioning hostname "multinode-378707-m03"
	I0911 11:31:47.308345 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetMachineName
	I0911 11:31:47.308519 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:31:47.311041 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.311688 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:31:47.311722 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.311998 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:31:47.312221 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:31:47.312424 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:31:47.312607 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:31:47.312845 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:31:47.313471 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0911 11:31:47.313498 2238380 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-378707-m03 && echo "multinode-378707-m03" | sudo tee /etc/hostname
	I0911 11:31:47.440354 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-378707-m03
	
	I0911 11:31:47.440397 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:31:47.443526 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.443968 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:31:47.444005 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.444170 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:31:47.444393 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:31:47.444572 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:31:47.444744 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:31:47.444961 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:31:47.445462 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0911 11:31:47.445483 2238380 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-378707-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-378707-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-378707-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:31:47.558279 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:31:47.558317 2238380 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:31:47.558348 2238380 buildroot.go:174] setting up certificates
	I0911 11:31:47.558362 2238380 provision.go:83] configureAuth start
	I0911 11:31:47.558381 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetMachineName
	I0911 11:31:47.558698 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetIP
	I0911 11:31:47.561709 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.562131 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:31:47.562166 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.562482 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:31:47.564907 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.565356 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:31:47.565390 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.565556 2238380 provision.go:138] copyHostCerts
	I0911 11:31:47.565592 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:31:47.565632 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:31:47.565645 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:31:47.565752 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:31:47.565851 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:31:47.565874 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:31:47.565885 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:31:47.565941 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:31:47.566012 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:31:47.566038 2238380 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:31:47.566043 2238380 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:31:47.566076 2238380 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:31:47.566206 2238380 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.multinode-378707-m03 san=[192.168.39.134 192.168.39.134 localhost 127.0.0.1 minikube multinode-378707-m03]
	I0911 11:31:47.678207 2238380 provision.go:172] copyRemoteCerts
	I0911 11:31:47.678279 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:31:47.678311 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:31:47.681368 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.681766 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:31:47.681800 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.682026 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:31:47.682241 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:31:47.682412 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:31:47.682564 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m03/id_rsa Username:docker}
	I0911 11:31:47.771732 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0911 11:31:47.771837 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:31:47.796367 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0911 11:31:47.796452 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1237 bytes)
	I0911 11:31:47.820076 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0911 11:31:47.820170 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:31:47.844190 2238380 provision.go:86] duration metric: configureAuth took 285.809498ms
	I0911 11:31:47.844228 2238380 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:31:47.844461 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:31:47.844547 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:31:47.847663 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.848129 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:31:47.848157 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:31:47.848426 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:31:47.848646 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:31:47.848800 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:31:47.848939 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:31:47.849085 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:31:47.849470 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0911 11:31:47.849486 2238380 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:33:18.585739 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:33:18.585787 2238380 machine.go:91] provisioned docker machine in 1m31.277908993s
	I0911 11:33:18.585805 2238380 start.go:300] post-start starting for "multinode-378707-m03" (driver="kvm2")
	I0911 11:33:18.585820 2238380 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:33:18.585857 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .DriverName
	I0911 11:33:18.586346 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:33:18.586391 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:33:18.589669 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.590088 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:33:18.590128 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.590324 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:33:18.590582 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:33:18.590777 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:33:18.590946 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m03/id_rsa Username:docker}
	I0911 11:33:18.681005 2238380 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:33:18.685752 2238380 command_runner.go:130] > NAME=Buildroot
	I0911 11:33:18.685777 2238380 command_runner.go:130] > VERSION=2021.02.12-1-g88b5c50-dirty
	I0911 11:33:18.685781 2238380 command_runner.go:130] > ID=buildroot
	I0911 11:33:18.685787 2238380 command_runner.go:130] > VERSION_ID=2021.02.12
	I0911 11:33:18.685792 2238380 command_runner.go:130] > PRETTY_NAME="Buildroot 2021.02.12"
	I0911 11:33:18.685846 2238380 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:33:18.685864 2238380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:33:18.685946 2238380 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:33:18.686039 2238380 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:33:18.686053 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /etc/ssl/certs/22224712.pem
	I0911 11:33:18.686157 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:33:18.696027 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:33:18.722308 2238380 start.go:303] post-start completed in 136.482549ms
	I0911 11:33:18.722338 2238380 fix.go:56] fixHost completed within 1m31.43726049s
	I0911 11:33:18.722366 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:33:18.725336 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.725781 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:33:18.725826 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.726009 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:33:18.726239 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:33:18.726421 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:33:18.726577 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:33:18.726769 2238380 main.go:141] libmachine: Using SSH client type: native
	I0911 11:33:18.727175 2238380 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.134 22 <nil> <nil>}
	I0911 11:33:18.727187 2238380 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 11:33:18.842040 2238380 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694431998.835952726
	
	I0911 11:33:18.842065 2238380 fix.go:206] guest clock: 1694431998.835952726
	I0911 11:33:18.842073 2238380 fix.go:219] Guest: 2023-09-11 11:33:18.835952726 +0000 UTC Remote: 2023-09-11 11:33:18.722342757 +0000 UTC m=+555.970902516 (delta=113.609969ms)
	I0911 11:33:18.842089 2238380 fix.go:190] guest clock delta is within tolerance: 113.609969ms
	I0911 11:33:18.842095 2238380 start.go:83] releasing machines lock for "multinode-378707-m03", held for 1m31.557034147s
	I0911 11:33:18.842121 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .DriverName
	I0911 11:33:18.842435 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetIP
	I0911 11:33:18.845449 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.845885 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:33:18.845926 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.848458 2238380 out.go:177] * Found network options:
	I0911 11:33:18.850307 2238380 out.go:177]   - NO_PROXY=192.168.39.237,192.168.39.220
	W0911 11:33:18.852365 2238380 proxy.go:119] fail to check proxy env: Error ip not in block
	W0911 11:33:18.852392 2238380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0911 11:33:18.852410 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .DriverName
	I0911 11:33:18.853183 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .DriverName
	I0911 11:33:18.853385 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .DriverName
	I0911 11:33:18.853476 2238380 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:33:18.853530 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	W0911 11:33:18.853630 2238380 proxy.go:119] fail to check proxy env: Error ip not in block
	W0911 11:33:18.853654 2238380 proxy.go:119] fail to check proxy env: Error ip not in block
	I0911 11:33:18.853762 2238380 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:33:18.853788 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHHostname
	I0911 11:33:18.856465 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.856741 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.856801 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:33:18.856844 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.856975 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:33:18.857168 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:33:18.857171 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:33:18.857200 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:18.857390 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHPort
	I0911 11:33:18.857403 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:33:18.857562 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m03/id_rsa Username:docker}
	I0911 11:33:18.857575 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHKeyPath
	I0911 11:33:18.857744 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetSSHUsername
	I0911 11:33:18.857908 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.134 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m03/id_rsa Username:docker}
	I0911 11:33:19.095907 2238380 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0911 11:33:19.095927 2238380 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0911 11:33:19.102511 2238380 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0911 11:33:19.102707 2238380 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:33:19.102803 2238380 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:33:19.113012 2238380 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
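(Not part of the captured log.) The `find ... -name *bridge* -or -name *podman* ... mv {} {}.mk_disabled` step above sidelines any bridge/podman CNI configs so they cannot conflict with the cluster's chosen CNI. A minimal standalone sketch of the same idea in Go; it is illustrative only, not minikube's implementation, and assumes the conventional /etc/cni/net.d directory and the ".mk_disabled" suffix seen in the log:

	// disable_bridge_cni.go - illustrative sketch only; not minikube's code.
	// Renames bridge/podman CNI config files in /etc/cni/net.d so the container
	// runtime stops loading them, mirroring the "mv {} {}.mk_disabled" step above.
	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	func main() {
		dir := "/etc/cni/net.d" // conventional CNI config directory
		entries, err := os.ReadDir(dir)
		if err != nil {
			fmt.Fprintln(os.Stderr, "read dir:", err)
			os.Exit(1)
		}
		disabled := 0
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue // already sidelined, or not a plain config file
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					fmt.Fprintln(os.Stderr, "rename:", err)
					continue
				}
				disabled++
			}
		}
		fmt.Printf("disabled %d bridge/podman CNI config(s)\n", disabled)
	}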
	I0911 11:33:19.113039 2238380 start.go:466] detecting cgroup driver to use...
	I0911 11:33:19.113119 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:33:19.131058 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:33:19.146589 2238380 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:33:19.146657 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:33:19.165606 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:33:19.180375 2238380 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:33:19.333617 2238380 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:33:19.482245 2238380 docker.go:212] disabling docker service ...
	I0911 11:33:19.482346 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:33:19.499115 2238380 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:33:19.514218 2238380 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:33:19.659826 2238380 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:33:19.778318 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:33:19.791976 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:33:19.810391 2238380 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0911 11:33:19.810441 2238380 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:33:19.810501 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:33:19.821334 2238380 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:33:19.821410 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:33:19.832849 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:33:19.844276 2238380 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
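(Not part of the captured log.) The sed invocations above pin the pause image, switch the cgroup manager to "cgroupfs", and reset conmon_cgroup in /etc/crio/crio.conf.d/02-crio.conf. A rough Go equivalent of the first two whole-line substitutions, operating on a copy of the file; path and values are taken from the log, and this is a sketch rather than the tool's own code:

	// rewrite_crio_dropin.go - illustrative sketch of the sed-style edits shown above.
	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf" // drop-in edited in the log
		data, err := os.ReadFile(path)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Replace whole lines, as sed's 's|^.*pause_image = .*$|...|' does.
		pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
		cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
		out := pause.ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
		out = cgroup.ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
		// Write to a sibling file instead of overwriting the live config in this sketch.
		if err := os.WriteFile(path+".rewritten", out, 0o644); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}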
	I0911 11:33:19.856003 2238380 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:33:19.866804 2238380 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:33:19.876209 2238380 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0911 11:33:19.876307 2238380 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:33:19.886493 2238380 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:33:20.022158 2238380 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:33:20.268339 2238380 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:33:20.268436 2238380 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:33:20.274634 2238380 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0911 11:33:20.274669 2238380 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0911 11:33:20.274679 2238380 command_runner.go:130] > Device: 16h/22d	Inode: 1152        Links: 1
	I0911 11:33:20.274689 2238380 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:33:20.274697 2238380 command_runner.go:130] > Access: 2023-09-11 11:33:20.185170356 +0000
	I0911 11:33:20.274707 2238380 command_runner.go:130] > Modify: 2023-09-11 11:33:20.185170356 +0000
	I0911 11:33:20.274720 2238380 command_runner.go:130] > Change: 2023-09-11 11:33:20.185170356 +0000
	I0911 11:33:20.274730 2238380 command_runner.go:130] >  Birth: -
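(Not part of the captured log.) After restarting CRI-O the log waits up to 60s for /var/run/crio/crio.sock to appear before probing it; here the stat succeeds immediately. A minimal polling loop with the same intent, using the timeout and path from the log (minikube's own wait loop may differ):

	// wait_for_socket.go - poll until a socket path exists or a deadline passes.
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	func waitForPath(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil // socket file is present
			}
			time.Sleep(200 * time.Millisecond)
		}
		return fmt.Errorf("timed out waiting for %s", path)
	}

	func main() {
		if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is ready")
	}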
	I0911 11:33:20.274757 2238380 start.go:534] Will wait 60s for crictl version
	I0911 11:33:20.274823 2238380 ssh_runner.go:195] Run: which crictl
	I0911 11:33:20.278920 2238380 command_runner.go:130] > /usr/bin/crictl
	I0911 11:33:20.279201 2238380 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:33:20.314181 2238380 command_runner.go:130] > Version:  0.1.0
	I0911 11:33:20.314206 2238380 command_runner.go:130] > RuntimeName:  cri-o
	I0911 11:33:20.314211 2238380 command_runner.go:130] > RuntimeVersion:  1.24.1
	I0911 11:33:20.314222 2238380 command_runner.go:130] > RuntimeApiVersion:  v1alpha2
	I0911 11:33:20.315714 2238380 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
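(Not part of the captured log.) The `crictl version` probe above reports the runtime name and version as `Key:  Value` lines. A small sketch that shells out to crictl and pulls those two fields; it assumes crictl is on PATH and that sudo is not required, unlike the log's invocation, and is illustrative only:

	// crictl_version.go - run `crictl version` and extract RuntimeName/RuntimeVersion.
	package main

	import (
		"bufio"
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		out, err := exec.Command("crictl", "version").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "crictl version:", err)
			os.Exit(1)
		}
		fields := map[string]string{}
		sc := bufio.NewScanner(strings.NewReader(string(out)))
		for sc.Scan() {
			if k, v, ok := strings.Cut(sc.Text(), ":"); ok {
				fields[strings.TrimSpace(k)] = strings.TrimSpace(v)
			}
		}
		fmt.Printf("runtime %s, version %s\n", fields["RuntimeName"], fields["RuntimeVersion"])
	}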
	I0911 11:33:20.315833 2238380 ssh_runner.go:195] Run: crio --version
	I0911 11:33:20.365268 2238380 command_runner.go:130] > crio version 1.24.1
	I0911 11:33:20.365303 2238380 command_runner.go:130] > Version:          1.24.1
	I0911 11:33:20.365319 2238380 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:33:20.365326 2238380 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:33:20.365336 2238380 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:33:20.365343 2238380 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:33:20.365350 2238380 command_runner.go:130] > Compiler:         gc
	I0911 11:33:20.365357 2238380 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:33:20.365366 2238380 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:33:20.365377 2238380 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:33:20.365384 2238380 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:33:20.365391 2238380 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:33:20.366741 2238380 ssh_runner.go:195] Run: crio --version
	I0911 11:33:20.422083 2238380 command_runner.go:130] > crio version 1.24.1
	I0911 11:33:20.422106 2238380 command_runner.go:130] > Version:          1.24.1
	I0911 11:33:20.422113 2238380 command_runner.go:130] > GitCommit:        a3bbde8a77c323aa6a485da9a9046299155c6016
	I0911 11:33:20.422117 2238380 command_runner.go:130] > GitTreeState:     dirty
	I0911 11:33:20.422123 2238380 command_runner.go:130] > BuildDate:        2023-08-24T15:40:31Z
	I0911 11:33:20.422128 2238380 command_runner.go:130] > GoVersion:        go1.19.9
	I0911 11:33:20.422132 2238380 command_runner.go:130] > Compiler:         gc
	I0911 11:33:20.422136 2238380 command_runner.go:130] > Platform:         linux/amd64
	I0911 11:33:20.422141 2238380 command_runner.go:130] > Linkmode:         dynamic
	I0911 11:33:20.422150 2238380 command_runner.go:130] > BuildTags:        exclude_graphdriver_devicemapper, containers_image_ostree_stub, seccomp
	I0911 11:33:20.422154 2238380 command_runner.go:130] > SeccompEnabled:   true
	I0911 11:33:20.422159 2238380 command_runner.go:130] > AppArmorEnabled:  false
	I0911 11:33:20.424557 2238380 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 11:33:20.426197 2238380 out.go:177]   - env NO_PROXY=192.168.39.237
	I0911 11:33:20.427768 2238380 out.go:177]   - env NO_PROXY=192.168.39.237,192.168.39.220
	I0911 11:33:20.429237 2238380 main.go:141] libmachine: (multinode-378707-m03) Calling .GetIP
	I0911 11:33:20.432760 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:20.433266 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ce:30:bf", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:21:42 +0000 UTC Type:0 Mac:52:54:00:ce:30:bf Iaid: IPaddr:192.168.39.134 Prefix:24 Hostname:multinode-378707-m03 Clientid:01:52:54:00:ce:30:bf}
	I0911 11:33:20.433301 2238380 main.go:141] libmachine: (multinode-378707-m03) DBG | domain multinode-378707-m03 has defined IP address 192.168.39.134 and MAC address 52:54:00:ce:30:bf in network mk-multinode-378707
	I0911 11:33:20.433543 2238380 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 11:33:20.445445 2238380 command_runner.go:130] > 192.168.39.1	host.minikube.internal
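(Not part of the captured log.) The grep above confirms the node already resolves host.minikube.internal to the gateway address. A hedged sketch that performs the same check and appends the entry only if it is missing; hostname and IP are copied from the log, and writing to /etc/hosts would need root:

	// ensure_hosts_entry.go - add "192.168.39.1 host.minikube.internal" if absent.
	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	func main() {
		const entry = "192.168.39.1\thost.minikube.internal"
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		if strings.Contains(string(data), "host.minikube.internal") {
			fmt.Println("entry already present")
			return
		}
		f, err := os.OpenFile("/etc/hosts", os.O_APPEND|os.O_WRONLY, 0o644)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer f.Close()
		if _, err := fmt.Fprintln(f, entry); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("entry appended")
	}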
	I0911 11:33:20.445871 2238380 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707 for IP: 192.168.39.134
	I0911 11:33:20.445904 2238380 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:33:20.446157 2238380 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:33:20.446226 2238380 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:33:20.446247 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0911 11:33:20.446273 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0911 11:33:20.446291 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0911 11:33:20.446308 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0911 11:33:20.446382 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:33:20.446424 2238380 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:33:20.446443 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:33:20.446478 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:33:20.446512 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:33:20.446547 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:33:20.446603 2238380 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:33:20.446643 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:33:20.446663 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem -> /usr/share/ca-certificates/2222471.pem
	I0911 11:33:20.446683 2238380 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> /usr/share/ca-certificates/22224712.pem
	I0911 11:33:20.447231 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:33:20.528453 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:33:20.572533 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:33:20.599284 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:33:20.627815 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:33:20.653800 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:33:20.680515 2238380 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:33:20.705018 2238380 ssh_runner.go:195] Run: openssl version
	I0911 11:33:20.711470 2238380 command_runner.go:130] > OpenSSL 1.1.1n  15 Mar 2022
	I0911 11:33:20.711620 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:33:20.723880 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:33:20.729097 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:33:20.729584 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:33:20.729644 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:33:20.736201 2238380 command_runner.go:130] > b5213941
	I0911 11:33:20.737049 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:33:20.749591 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:33:20.763659 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:33:20.768884 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:33:20.769060 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:33:20.769126 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:33:20.774658 2238380 command_runner.go:130] > 51391683
	I0911 11:33:20.774920 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 11:33:20.785418 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:33:20.797762 2238380 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:33:20.802721 2238380 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:33:20.802974 2238380 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:33:20.803047 2238380 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:33:20.808976 2238380 command_runner.go:130] > 3ec20f2e
	I0911 11:33:20.809060 2238380 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
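(Not part of the captured log.) The openssl/ln sequence above installs each extra CA under /etc/ssl/certs using its OpenSSL subject-hash name (e.g. b5213941.0), which is how OpenSSL-based clients locate trusted CAs. A sketch of one iteration, with paths taken from the log; it relies on the `openssl` binary for the subject hash rather than reimplementing it, and is illustrative only:

	// link_ca_by_hash.go - symlink a CA cert as /etc/ssl/certs/<subject-hash>.0.
	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	func main() {
		cert := "/usr/share/ca-certificates/minikubeCA.pem" // CA installed in the log
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", cert).Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, "openssl:", err)
			os.Exit(1)
		}
		hash := strings.TrimSpace(string(out)) // e.g. "b5213941" in the log above
		link := "/etc/ssl/certs/" + hash + ".0"
		// Remove any stale link, then create it; the log instead guards with
		// `test -L ... || ln -fs ...` and skips creation when the link exists.
		_ = os.Remove(link)
		if err := os.Symlink("/etc/ssl/certs/minikubeCA.pem", link); err != nil {
			fmt.Fprintln(os.Stderr, "symlink:", err)
			os.Exit(1)
		}
		fmt.Println("linked", link)
	}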
	I0911 11:33:20.819822 2238380 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:33:20.824520 2238380 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:33:20.824566 2238380 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 11:33:20.824655 2238380 ssh_runner.go:195] Run: crio config
	I0911 11:33:20.887281 2238380 command_runner.go:130] ! time="2023-09-11 11:33:20.881277023Z" level=info msg="Starting CRI-O, version: 1.24.1, git: a3bbde8a77c323aa6a485da9a9046299155c6016(dirty)"
	I0911 11:33:20.887330 2238380 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0911 11:33:20.900252 2238380 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0911 11:33:20.900278 2238380 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0911 11:33:20.900286 2238380 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0911 11:33:20.900291 2238380 command_runner.go:130] > #
	I0911 11:33:20.900308 2238380 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0911 11:33:20.900319 2238380 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0911 11:33:20.900329 2238380 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0911 11:33:20.900341 2238380 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0911 11:33:20.900346 2238380 command_runner.go:130] > # reload'.
	I0911 11:33:20.900356 2238380 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0911 11:33:20.900367 2238380 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0911 11:33:20.900377 2238380 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0911 11:33:20.900388 2238380 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0911 11:33:20.900398 2238380 command_runner.go:130] > [crio]
	I0911 11:33:20.900408 2238380 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0911 11:33:20.900416 2238380 command_runner.go:130] > # containers images, in this directory.
	I0911 11:33:20.900427 2238380 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0911 11:33:20.900439 2238380 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0911 11:33:20.900447 2238380 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0911 11:33:20.900453 2238380 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0911 11:33:20.900463 2238380 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0911 11:33:20.900474 2238380 command_runner.go:130] > storage_driver = "overlay"
	I0911 11:33:20.900488 2238380 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0911 11:33:20.900501 2238380 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0911 11:33:20.900511 2238380 command_runner.go:130] > storage_option = [
	I0911 11:33:20.900524 2238380 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0911 11:33:20.900533 2238380 command_runner.go:130] > ]
	I0911 11:33:20.900548 2238380 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0911 11:33:20.900562 2238380 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0911 11:33:20.900570 2238380 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0911 11:33:20.900582 2238380 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0911 11:33:20.900593 2238380 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0911 11:33:20.900602 2238380 command_runner.go:130] > # always happen on a node reboot
	I0911 11:33:20.900607 2238380 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0911 11:33:20.900613 2238380 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0911 11:33:20.900618 2238380 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0911 11:33:20.900629 2238380 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0911 11:33:20.900642 2238380 command_runner.go:130] > # version_file_persist = "/var/lib/crio/version"
	I0911 11:33:20.900655 2238380 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0911 11:33:20.900671 2238380 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0911 11:33:20.900681 2238380 command_runner.go:130] > # internal_wipe = true
	I0911 11:33:20.900691 2238380 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0911 11:33:20.900701 2238380 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0911 11:33:20.900707 2238380 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0911 11:33:20.900720 2238380 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0911 11:33:20.900733 2238380 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0911 11:33:20.900742 2238380 command_runner.go:130] > [crio.api]
	I0911 11:33:20.900752 2238380 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0911 11:33:20.900763 2238380 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0911 11:33:20.900774 2238380 command_runner.go:130] > # IP address on which the stream server will listen.
	I0911 11:33:20.900783 2238380 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0911 11:33:20.900789 2238380 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0911 11:33:20.900802 2238380 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0911 11:33:20.900809 2238380 command_runner.go:130] > # stream_port = "0"
	I0911 11:33:20.900833 2238380 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0911 11:33:20.900843 2238380 command_runner.go:130] > # stream_enable_tls = false
	I0911 11:33:20.900855 2238380 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0911 11:33:20.900865 2238380 command_runner.go:130] > # stream_idle_timeout = ""
	I0911 11:33:20.900875 2238380 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0911 11:33:20.900888 2238380 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0911 11:33:20.900895 2238380 command_runner.go:130] > # minutes.
	I0911 11:33:20.900905 2238380 command_runner.go:130] > # stream_tls_cert = ""
	I0911 11:33:20.900919 2238380 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0911 11:33:20.900933 2238380 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0911 11:33:20.900943 2238380 command_runner.go:130] > # stream_tls_key = ""
	I0911 11:33:20.900954 2238380 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0911 11:33:20.900963 2238380 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0911 11:33:20.900975 2238380 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0911 11:33:20.900985 2238380 command_runner.go:130] > # stream_tls_ca = ""
	I0911 11:33:20.901001 2238380 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:33:20.901012 2238380 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0911 11:33:20.901027 2238380 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 16 * 1024 * 1024.
	I0911 11:33:20.901036 2238380 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0911 11:33:20.901064 2238380 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0911 11:33:20.901078 2238380 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0911 11:33:20.901084 2238380 command_runner.go:130] > [crio.runtime]
	I0911 11:33:20.901112 2238380 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0911 11:33:20.901124 2238380 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0911 11:33:20.901128 2238380 command_runner.go:130] > # "nofile=1024:2048"
	I0911 11:33:20.901136 2238380 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0911 11:33:20.901143 2238380 command_runner.go:130] > # default_ulimits = [
	I0911 11:33:20.901149 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.901159 2238380 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0911 11:33:20.901166 2238380 command_runner.go:130] > # no_pivot = false
	I0911 11:33:20.901184 2238380 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0911 11:33:20.901202 2238380 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0911 11:33:20.901211 2238380 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0911 11:33:20.901221 2238380 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0911 11:33:20.901233 2238380 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0911 11:33:20.901249 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:33:20.901258 2238380 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0911 11:33:20.901350 2238380 command_runner.go:130] > # Cgroup setting for conmon
	I0911 11:33:20.901380 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0911 11:33:20.901392 2238380 command_runner.go:130] > conmon_cgroup = "pod"
	I0911 11:33:20.901405 2238380 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0911 11:33:20.901416 2238380 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0911 11:33:20.901431 2238380 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0911 11:33:20.901441 2238380 command_runner.go:130] > conmon_env = [
	I0911 11:33:20.901454 2238380 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0911 11:33:20.901463 2238380 command_runner.go:130] > ]
	I0911 11:33:20.901475 2238380 command_runner.go:130] > # Additional environment variables to set for all the
	I0911 11:33:20.901487 2238380 command_runner.go:130] > # containers. These are overridden if set in the
	I0911 11:33:20.901497 2238380 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0911 11:33:20.901505 2238380 command_runner.go:130] > # default_env = [
	I0911 11:33:20.901514 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.901525 2238380 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0911 11:33:20.901535 2238380 command_runner.go:130] > # selinux = false
	I0911 11:33:20.901549 2238380 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0911 11:33:20.901564 2238380 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0911 11:33:20.901576 2238380 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0911 11:33:20.901582 2238380 command_runner.go:130] > # seccomp_profile = ""
	I0911 11:33:20.901590 2238380 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0911 11:33:20.901602 2238380 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0911 11:33:20.901616 2238380 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0911 11:33:20.901627 2238380 command_runner.go:130] > # which might increase security.
	I0911 11:33:20.901639 2238380 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0911 11:33:20.901653 2238380 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0911 11:33:20.901664 2238380 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0911 11:33:20.901682 2238380 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0911 11:33:20.901695 2238380 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0911 11:33:20.901707 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:33:20.901718 2238380 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0911 11:33:20.901731 2238380 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0911 11:33:20.901741 2238380 command_runner.go:130] > # the cgroup blockio controller.
	I0911 11:33:20.901756 2238380 command_runner.go:130] > # blockio_config_file = ""
	I0911 11:33:20.901767 2238380 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0911 11:33:20.901775 2238380 command_runner.go:130] > # irqbalance daemon.
	I0911 11:33:20.901787 2238380 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0911 11:33:20.901801 2238380 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0911 11:33:20.901813 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:33:20.901823 2238380 command_runner.go:130] > # rdt_config_file = ""
	I0911 11:33:20.901835 2238380 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0911 11:33:20.901845 2238380 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0911 11:33:20.901855 2238380 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0911 11:33:20.901864 2238380 command_runner.go:130] > # separate_pull_cgroup = ""
	I0911 11:33:20.901879 2238380 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0911 11:33:20.901893 2238380 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0911 11:33:20.901902 2238380 command_runner.go:130] > # will be added.
	I0911 11:33:20.901912 2238380 command_runner.go:130] > # default_capabilities = [
	I0911 11:33:20.901922 2238380 command_runner.go:130] > # 	"CHOWN",
	I0911 11:33:20.901931 2238380 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0911 11:33:20.901938 2238380 command_runner.go:130] > # 	"FSETID",
	I0911 11:33:20.901942 2238380 command_runner.go:130] > # 	"FOWNER",
	I0911 11:33:20.901948 2238380 command_runner.go:130] > # 	"SETGID",
	I0911 11:33:20.901958 2238380 command_runner.go:130] > # 	"SETUID",
	I0911 11:33:20.901968 2238380 command_runner.go:130] > # 	"SETPCAP",
	I0911 11:33:20.901978 2238380 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0911 11:33:20.901987 2238380 command_runner.go:130] > # 	"KILL",
	I0911 11:33:20.901995 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.902009 2238380 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0911 11:33:20.902019 2238380 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:33:20.902025 2238380 command_runner.go:130] > # default_sysctls = [
	I0911 11:33:20.902031 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.902042 2238380 command_runner.go:130] > # List of devices on the host that a
	I0911 11:33:20.902056 2238380 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0911 11:33:20.902065 2238380 command_runner.go:130] > # allowed_devices = [
	I0911 11:33:20.902075 2238380 command_runner.go:130] > # 	"/dev/fuse",
	I0911 11:33:20.902083 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.902094 2238380 command_runner.go:130] > # List of additional devices. specified as
	I0911 11:33:20.902107 2238380 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0911 11:33:20.902118 2238380 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0911 11:33:20.902157 2238380 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0911 11:33:20.902168 2238380 command_runner.go:130] > # additional_devices = [
	I0911 11:33:20.902176 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.902192 2238380 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0911 11:33:20.902201 2238380 command_runner.go:130] > # cdi_spec_dirs = [
	I0911 11:33:20.902209 2238380 command_runner.go:130] > # 	"/etc/cdi",
	I0911 11:33:20.902220 2238380 command_runner.go:130] > # 	"/var/run/cdi",
	I0911 11:33:20.902229 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.902243 2238380 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0911 11:33:20.902256 2238380 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0911 11:33:20.902266 2238380 command_runner.go:130] > # Defaults to false.
	I0911 11:33:20.902276 2238380 command_runner.go:130] > # device_ownership_from_security_context = false
	I0911 11:33:20.902286 2238380 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0911 11:33:20.902297 2238380 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0911 11:33:20.902307 2238380 command_runner.go:130] > # hooks_dir = [
	I0911 11:33:20.902318 2238380 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0911 11:33:20.902326 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.902341 2238380 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0911 11:33:20.902354 2238380 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0911 11:33:20.902363 2238380 command_runner.go:130] > # its default mounts from the following two files:
	I0911 11:33:20.902370 2238380 command_runner.go:130] > #
	I0911 11:33:20.902381 2238380 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0911 11:33:20.902396 2238380 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0911 11:33:20.902408 2238380 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0911 11:33:20.902416 2238380 command_runner.go:130] > #
	I0911 11:33:20.902430 2238380 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0911 11:33:20.902443 2238380 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0911 11:33:20.902452 2238380 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0911 11:33:20.902463 2238380 command_runner.go:130] > #      only add mounts it finds in this file.
	I0911 11:33:20.902472 2238380 command_runner.go:130] > #
	I0911 11:33:20.902482 2238380 command_runner.go:130] > # default_mounts_file = ""
	I0911 11:33:20.902492 2238380 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0911 11:33:20.902506 2238380 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0911 11:33:20.902515 2238380 command_runner.go:130] > pids_limit = 1024
	I0911 11:33:20.902528 2238380 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0911 11:33:20.902539 2238380 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0911 11:33:20.902553 2238380 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0911 11:33:20.902570 2238380 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0911 11:33:20.902580 2238380 command_runner.go:130] > # log_size_max = -1
	I0911 11:33:20.902595 2238380 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kuberentes log file
	I0911 11:33:20.902604 2238380 command_runner.go:130] > # log_to_journald = false
	I0911 11:33:20.902616 2238380 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0911 11:33:20.902625 2238380 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0911 11:33:20.902637 2238380 command_runner.go:130] > # Path to directory for container attach sockets.
	I0911 11:33:20.902649 2238380 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0911 11:33:20.902662 2238380 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0911 11:33:20.902670 2238380 command_runner.go:130] > # bind_mount_prefix = ""
	I0911 11:33:20.902682 2238380 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0911 11:33:20.902691 2238380 command_runner.go:130] > # read_only = false
	I0911 11:33:20.902703 2238380 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0911 11:33:20.902715 2238380 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0911 11:33:20.902726 2238380 command_runner.go:130] > # live configuration reload.
	I0911 11:33:20.902738 2238380 command_runner.go:130] > # log_level = "info"
	I0911 11:33:20.902748 2238380 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0911 11:33:20.902760 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:33:20.902769 2238380 command_runner.go:130] > # log_filter = ""
	I0911 11:33:20.902782 2238380 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0911 11:33:20.902791 2238380 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0911 11:33:20.902800 2238380 command_runner.go:130] > # separated by comma.
	I0911 11:33:20.902811 2238380 command_runner.go:130] > # uid_mappings = ""
	I0911 11:33:20.902825 2238380 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0911 11:33:20.902838 2238380 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0911 11:33:20.902851 2238380 command_runner.go:130] > # separated by comma.
	I0911 11:33:20.902861 2238380 command_runner.go:130] > # gid_mappings = ""
	I0911 11:33:20.902872 2238380 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0911 11:33:20.902882 2238380 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:33:20.902897 2238380 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:33:20.902908 2238380 command_runner.go:130] > # minimum_mappable_uid = -1
	I0911 11:33:20.902919 2238380 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0911 11:33:20.902933 2238380 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0911 11:33:20.902947 2238380 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0911 11:33:20.902957 2238380 command_runner.go:130] > # minimum_mappable_gid = -1
	I0911 11:33:20.902966 2238380 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0911 11:33:20.902980 2238380 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0911 11:33:20.902994 2238380 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0911 11:33:20.903006 2238380 command_runner.go:130] > # ctr_stop_timeout = 30
	I0911 11:33:20.903019 2238380 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0911 11:33:20.903032 2238380 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0911 11:33:20.903042 2238380 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0911 11:33:20.903050 2238380 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0911 11:33:20.903060 2238380 command_runner.go:130] > drop_infra_ctr = false
	I0911 11:33:20.903075 2238380 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0911 11:33:20.903088 2238380 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0911 11:33:20.903104 2238380 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0911 11:33:20.903114 2238380 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0911 11:33:20.903125 2238380 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0911 11:33:20.903133 2238380 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0911 11:33:20.903140 2238380 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0911 11:33:20.903157 2238380 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0911 11:33:20.903168 2238380 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0911 11:33:20.903186 2238380 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0911 11:33:20.903267 2238380 command_runner.go:130] > # The name is matched against the runtimes map below. If this value is changed,
	I0911 11:33:20.903308 2238380 command_runner.go:130] > # the corresponding existing entry from the runtimes map below will be ignored.
	I0911 11:33:20.903320 2238380 command_runner.go:130] > # default_runtime = "runc"
	I0911 11:33:20.903331 2238380 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0911 11:33:20.903362 2238380 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0911 11:33:20.903381 2238380 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jepordize the health of the node, and whose
	I0911 11:33:20.903391 2238380 command_runner.go:130] > # creation as a file is not desired either.
	I0911 11:33:20.903399 2238380 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0911 11:33:20.903408 2238380 command_runner.go:130] > # the hostname is being managed dynamically.
	I0911 11:33:20.903417 2238380 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0911 11:33:20.903426 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.903440 2238380 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0911 11:33:20.903455 2238380 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0911 11:33:20.903469 2238380 command_runner.go:130] > # If no runtime handler is provided, the runtime will be picked based on the level
	I0911 11:33:20.903482 2238380 command_runner.go:130] > # of trust of the workload. Each entry in the table should follow the format:
	I0911 11:33:20.903489 2238380 command_runner.go:130] > #
	I0911 11:33:20.903501 2238380 command_runner.go:130] > #[crio.runtime.runtimes.runtime-handler]
	I0911 11:33:20.903513 2238380 command_runner.go:130] > #  runtime_path = "/path/to/the/executable"
	I0911 11:33:20.903523 2238380 command_runner.go:130] > #  runtime_type = "oci"
	I0911 11:33:20.903534 2238380 command_runner.go:130] > #  runtime_root = "/path/to/the/root"
	I0911 11:33:20.903545 2238380 command_runner.go:130] > #  privileged_without_host_devices = false
	I0911 11:33:20.903555 2238380 command_runner.go:130] > #  allowed_annotations = []
	I0911 11:33:20.903565 2238380 command_runner.go:130] > # Where:
	I0911 11:33:20.903573 2238380 command_runner.go:130] > # - runtime-handler: name used to identify the runtime
	I0911 11:33:20.903583 2238380 command_runner.go:130] > # - runtime_path (optional, string): absolute path to the runtime executable in
	I0911 11:33:20.903597 2238380 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0911 11:33:20.903614 2238380 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0911 11:33:20.903624 2238380 command_runner.go:130] > #   in $PATH.
	I0911 11:33:20.903637 2238380 command_runner.go:130] > # - runtime_type (optional, string): type of runtime, one of: "oci", "vm". If
	I0911 11:33:20.903648 2238380 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0911 11:33:20.903659 2238380 command_runner.go:130] > # - runtime_root (optional, string): root directory for storage of containers
	I0911 11:33:20.903666 2238380 command_runner.go:130] > #   state.
	I0911 11:33:20.903674 2238380 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0911 11:33:20.903684 2238380 command_runner.go:130] > #   file. This can only be used with when using the VM runtime_type.
	I0911 11:33:20.903695 2238380 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0911 11:33:20.903708 2238380 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0911 11:33:20.903722 2238380 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0911 11:33:20.903736 2238380 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0911 11:33:20.903744 2238380 command_runner.go:130] > #   The currently recognized values are:
	I0911 11:33:20.903754 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0911 11:33:20.903766 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0911 11:33:20.903780 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0911 11:33:20.903794 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0911 11:33:20.903810 2238380 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0911 11:33:20.903823 2238380 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0911 11:33:20.903832 2238380 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0911 11:33:20.903841 2238380 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): if set to "container", indicates exec probes
	I0911 11:33:20.903849 2238380 command_runner.go:130] > #   should be moved to the container's cgroup
	I0911 11:33:20.903860 2238380 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0911 11:33:20.903872 2238380 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0911 11:33:20.903881 2238380 command_runner.go:130] > runtime_type = "oci"
	I0911 11:33:20.903891 2238380 command_runner.go:130] > runtime_root = "/run/runc"
	I0911 11:33:20.903903 2238380 command_runner.go:130] > runtime_config_path = ""
	I0911 11:33:20.903912 2238380 command_runner.go:130] > monitor_path = ""
	I0911 11:33:20.903918 2238380 command_runner.go:130] > monitor_cgroup = ""
	I0911 11:33:20.903925 2238380 command_runner.go:130] > monitor_exec_cgroup = ""
	I0911 11:33:20.903936 2238380 command_runner.go:130] > # crun is a fast and lightweight fully featured OCI runtime and C library for
	I0911 11:33:20.903946 2238380 command_runner.go:130] > # running containers
	I0911 11:33:20.903957 2238380 command_runner.go:130] > #[crio.runtime.runtimes.crun]
	I0911 11:33:20.903971 2238380 command_runner.go:130] > # Kata Containers is an OCI runtime, where containers are run inside lightweight
	I0911 11:33:20.904011 2238380 command_runner.go:130] > # VMs. Kata provides additional isolation towards the host, minimizing the host attack
	I0911 11:33:20.904024 2238380 command_runner.go:130] > # surface and mitigating the consequences of containers breakout.
	I0911 11:33:20.904037 2238380 command_runner.go:130] > # Kata Containers with the default configured VMM
	I0911 11:33:20.904048 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-runtime]
	I0911 11:33:20.904059 2238380 command_runner.go:130] > # Kata Containers with the QEMU VMM
	I0911 11:33:20.904070 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-qemu]
	I0911 11:33:20.904081 2238380 command_runner.go:130] > # Kata Containers with the Firecracker VMM
	I0911 11:33:20.904089 2238380 command_runner.go:130] > #[crio.runtime.runtimes.kata-fc]
	I0911 11:33:20.904100 2238380 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0911 11:33:20.904112 2238380 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0911 11:33:20.904126 2238380 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0911 11:33:20.904141 2238380 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0911 11:33:20.904157 2238380 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0911 11:33:20.904169 2238380 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0911 11:33:20.904182 2238380 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0911 11:33:20.904198 2238380 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0911 11:33:20.904213 2238380 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0911 11:33:20.904228 2238380 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0911 11:33:20.904237 2238380 command_runner.go:130] > # Example:
	I0911 11:33:20.904248 2238380 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0911 11:33:20.904257 2238380 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0911 11:33:20.904265 2238380 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0911 11:33:20.904273 2238380 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0911 11:33:20.904283 2238380 command_runner.go:130] > # cpuset = 0
	I0911 11:33:20.904293 2238380 command_runner.go:130] > # cpushares = "0-1"
	I0911 11:33:20.904301 2238380 command_runner.go:130] > # Where:
	I0911 11:33:20.904312 2238380 command_runner.go:130] > # The workload name is workload-type.
	I0911 11:33:20.904326 2238380 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0911 11:33:20.904343 2238380 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0911 11:33:20.904353 2238380 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0911 11:33:20.904367 2238380 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0911 11:33:20.904381 2238380 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0911 11:33:20.904389 2238380 command_runner.go:130] > # 
	I0911 11:33:20.904403 2238380 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0911 11:33:20.904412 2238380 command_runner.go:130] > #
	I0911 11:33:20.904424 2238380 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0911 11:33:20.904434 2238380 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0911 11:33:20.904447 2238380 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0911 11:33:20.904463 2238380 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0911 11:33:20.904476 2238380 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0911 11:33:20.904485 2238380 command_runner.go:130] > [crio.image]
	I0911 11:33:20.904497 2238380 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0911 11:33:20.904508 2238380 command_runner.go:130] > # default_transport = "docker://"
	I0911 11:33:20.904517 2238380 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0911 11:33:20.904523 2238380 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:33:20.904529 2238380 command_runner.go:130] > # global_auth_file = ""
	I0911 11:33:20.904535 2238380 command_runner.go:130] > # The image used to instantiate infra containers.
	I0911 11:33:20.904543 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:33:20.904555 2238380 command_runner.go:130] > pause_image = "registry.k8s.io/pause:3.9"
	I0911 11:33:20.904570 2238380 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0911 11:33:20.904582 2238380 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0911 11:33:20.904594 2238380 command_runner.go:130] > # This option supports live configuration reload.
	I0911 11:33:20.904604 2238380 command_runner.go:130] > # pause_image_auth_file = ""
	I0911 11:33:20.904614 2238380 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0911 11:33:20.904623 2238380 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0911 11:33:20.904629 2238380 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0911 11:33:20.904636 2238380 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0911 11:33:20.904641 2238380 command_runner.go:130] > # pause_command = "/pause"
	I0911 11:33:20.904651 2238380 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0911 11:33:20.904659 2238380 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0911 11:33:20.904667 2238380 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0911 11:33:20.904673 2238380 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0911 11:33:20.904681 2238380 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0911 11:33:20.904685 2238380 command_runner.go:130] > # signature_policy = ""
	I0911 11:33:20.904700 2238380 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0911 11:33:20.904714 2238380 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0911 11:33:20.904724 2238380 command_runner.go:130] > # changing them here.
	I0911 11:33:20.904731 2238380 command_runner.go:130] > # insecure_registries = [
	I0911 11:33:20.904740 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.904754 2238380 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0911 11:33:20.904763 2238380 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0911 11:33:20.904767 2238380 command_runner.go:130] > # image_volumes = "mkdir"
	I0911 11:33:20.904775 2238380 command_runner.go:130] > # Temporary directory to use for storing big files
	I0911 11:33:20.904782 2238380 command_runner.go:130] > # big_files_temporary_dir = ""
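	The image-related options above defer registry configuration to containers-registries.conf(5). A minimal sketch of such a file, assuming a hypothetical insecure in-cluster registry at registry.local:5000 (the host is illustrative, not taken from this run):

	  # /etc/containers/registries.conf
	  unqualified-search-registries = ["docker.io"]

	  [[registry]]
	  location = "registry.local:5000"
	  insecure = true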
	I0911 11:33:20.904788 2238380 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0911 11:33:20.904794 2238380 command_runner.go:130] > # CNI plugins.
	I0911 11:33:20.904798 2238380 command_runner.go:130] > [crio.network]
	I0911 11:33:20.904804 2238380 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0911 11:33:20.904831 2238380 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0911 11:33:20.904839 2238380 command_runner.go:130] > # cni_default_network = ""
	I0911 11:33:20.904852 2238380 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0911 11:33:20.904863 2238380 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0911 11:33:20.904876 2238380 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0911 11:33:20.904886 2238380 command_runner.go:130] > # plugin_dirs = [
	I0911 11:33:20.904895 2238380 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0911 11:33:20.904903 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.904914 2238380 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0911 11:33:20.904923 2238380 command_runner.go:130] > [crio.metrics]
	I0911 11:33:20.904932 2238380 command_runner.go:130] > # Globally enable or disable metrics support.
	I0911 11:33:20.904942 2238380 command_runner.go:130] > enable_metrics = true
	I0911 11:33:20.904950 2238380 command_runner.go:130] > # Specify enabled metrics collectors.
	I0911 11:33:20.904961 2238380 command_runner.go:130] > # Per default all metrics are enabled.
	I0911 11:33:20.904975 2238380 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0911 11:33:20.904988 2238380 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0911 11:33:20.904997 2238380 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0911 11:33:20.905001 2238380 command_runner.go:130] > # metrics_collectors = [
	I0911 11:33:20.905008 2238380 command_runner.go:130] > # 	"operations",
	I0911 11:33:20.905012 2238380 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0911 11:33:20.905017 2238380 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0911 11:33:20.905021 2238380 command_runner.go:130] > # 	"operations_errors",
	I0911 11:33:20.905027 2238380 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0911 11:33:20.905034 2238380 command_runner.go:130] > # 	"image_pulls_by_name",
	I0911 11:33:20.905039 2238380 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0911 11:33:20.905045 2238380 command_runner.go:130] > # 	"image_pulls_failures",
	I0911 11:33:20.905049 2238380 command_runner.go:130] > # 	"image_pulls_successes",
	I0911 11:33:20.905055 2238380 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0911 11:33:20.905059 2238380 command_runner.go:130] > # 	"image_layer_reuse",
	I0911 11:33:20.905064 2238380 command_runner.go:130] > # 	"containers_oom_total",
	I0911 11:33:20.905068 2238380 command_runner.go:130] > # 	"containers_oom",
	I0911 11:33:20.905073 2238380 command_runner.go:130] > # 	"processes_defunct",
	I0911 11:33:20.905077 2238380 command_runner.go:130] > # 	"operations_total",
	I0911 11:33:20.905084 2238380 command_runner.go:130] > # 	"operations_latency_seconds",
	I0911 11:33:20.905088 2238380 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0911 11:33:20.905094 2238380 command_runner.go:130] > # 	"operations_errors_total",
	I0911 11:33:20.905099 2238380 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0911 11:33:20.905103 2238380 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0911 11:33:20.905110 2238380 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0911 11:33:20.905114 2238380 command_runner.go:130] > # 	"image_pulls_success_total",
	I0911 11:33:20.905120 2238380 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0911 11:33:20.905125 2238380 command_runner.go:130] > # 	"containers_oom_count_total",
	I0911 11:33:20.905132 2238380 command_runner.go:130] > # ]
	I0911 11:33:20.905137 2238380 command_runner.go:130] > # The port on which the metrics server will listen.
	I0911 11:33:20.905144 2238380 command_runner.go:130] > # metrics_port = 9090
	I0911 11:33:20.905149 2238380 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0911 11:33:20.905155 2238380 command_runner.go:130] > # metrics_socket = ""
	I0911 11:33:20.905160 2238380 command_runner.go:130] > # The certificate for the secure metrics server.
	I0911 11:33:20.905168 2238380 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0911 11:33:20.905174 2238380 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0911 11:33:20.905180 2238380 command_runner.go:130] > # certificate on any modification event.
	I0911 11:33:20.905184 2238380 command_runner.go:130] > # metrics_cert = ""
	I0911 11:33:20.905189 2238380 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0911 11:33:20.905194 2238380 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0911 11:33:20.905200 2238380 command_runner.go:130] > # metrics_key = ""
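	Since enable_metrics is set to true above and metrics_port defaults to 9090, the collectors listed above are scrapeable over plain HTTP on the node. A quick check, assuming curl is available inside the VM (not part of this log):

	  curl -s http://127.0.0.1:9090/metrics | grep crio_operations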
	I0911 11:33:20.905205 2238380 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0911 11:33:20.905211 2238380 command_runner.go:130] > [crio.tracing]
	I0911 11:33:20.905216 2238380 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0911 11:33:20.905224 2238380 command_runner.go:130] > # enable_tracing = false
	I0911 11:33:20.905229 2238380 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0911 11:33:20.905235 2238380 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0911 11:33:20.905240 2238380 command_runner.go:130] > # Number of samples to collect per million spans.
	I0911 11:33:20.905247 2238380 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0911 11:33:20.905253 2238380 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0911 11:33:20.905259 2238380 command_runner.go:130] > [crio.stats]
	I0911 11:33:20.905264 2238380 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0911 11:33:20.905272 2238380 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0911 11:33:20.905276 2238380 command_runner.go:130] > # stats_collection_period = 0
	I0911 11:33:20.905379 2238380 cni.go:84] Creating CNI manager for ""
	I0911 11:33:20.905389 2238380 cni.go:136] 3 nodes found, recommending kindnet
	I0911 11:33:20.905402 2238380 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:33:20.905426 2238380 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.134 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-378707 NodeName:multinode-378707-m03 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.237"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.134 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:33:20.905545 2238380 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.134
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-378707-m03"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.134
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.237"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:33:20.905611 2238380 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=multinode-378707-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.134
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
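	One way to confirm that the kubelet flags shown above actually landed on the node is to dump the unit together with its 10-kubeadm.conf drop-in via systemd (standard systemd commands run inside the minikube VM, not part of this log):

	  sudo systemctl cat kubelet        # prints kubelet.service plus the 10-kubeadm.conf drop-in
	  sudo systemctl status kubelet --no-pager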
	I0911 11:33:20.905670 2238380 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:33:20.917020 2238380 command_runner.go:130] > kubeadm
	I0911 11:33:20.917046 2238380 command_runner.go:130] > kubectl
	I0911 11:33:20.917052 2238380 command_runner.go:130] > kubelet
	I0911 11:33:20.917129 2238380 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:33:20.917207 2238380 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0911 11:33:20.928843 2238380 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (380 bytes)
	I0911 11:33:20.946417 2238380 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:33:20.964215 2238380 ssh_runner.go:195] Run: grep 192.168.39.237	control-plane.minikube.internal$ /etc/hosts
	I0911 11:33:20.968950 2238380 command_runner.go:130] > 192.168.39.237	control-plane.minikube.internal
	I0911 11:33:20.969233 2238380 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:33:20.969497 2238380 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:33:20.969823 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:33:20.969884 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:33:20.986607 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36567
	I0911 11:33:20.987088 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:33:20.987677 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:33:20.987703 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:33:20.988057 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:33:20.988244 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:33:20.988419 2238380 start.go:301] JoinCluster: &{Name:multinode-378707 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.28.1 ClusterName:multinode-378707 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.237 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.220 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ing
ress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:
SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:33:20.988585 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0911 11:33:20.988609 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:33:20.991952 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:33:20.992426 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:33:20.992462 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:33:20.992687 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:33:20.992915 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:33:20.993091 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:33:20.993234 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:33:21.184667 2238380 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token wc3sva.jhsiumgg3ee67574 --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 11:33:21.184751 2238380 start.go:314] removing existing worker node "m03" before attempting to rejoin cluster: &{Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0911 11:33:21.184801 2238380 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:33:21.185339 2238380 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:33:21.185407 2238380 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:33:21.201018 2238380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41073
	I0911 11:33:21.201588 2238380 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:33:21.202301 2238380 main.go:141] libmachine: Using API Version  1
	I0911 11:33:21.202326 2238380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:33:21.202783 2238380 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:33:21.203027 2238380 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:33:21.203313 2238380 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-378707-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data
	I0911 11:33:21.203342 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:33:21.206026 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:33:21.206505 2238380 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:33:21.206541 2238380 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:33:21.206756 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:33:21.207005 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:33:21.207195 2238380 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:33:21.207349 2238380 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:33:21.425014 2238380 command_runner.go:130] > node/multinode-378707-m03 cordoned
	I0911 11:33:24.476098 2238380 command_runner.go:130] > pod "busybox-5bc68d56bd-xg4bx" has DeletionTimestamp older than 1 seconds, skipping
	I0911 11:33:24.476132 2238380 command_runner.go:130] > node/multinode-378707-m03 drained
	I0911 11:33:24.478703 2238380 command_runner.go:130] ! Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
	I0911 11:33:24.478738 2238380 command_runner.go:130] ! Warning: ignoring DaemonSet-managed Pods: kube-system/kindnet-lrktz, kube-system/kube-proxy-kwvbm
	I0911 11:33:24.478772 2238380 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-378707-m03 --force --grace-period=1 --skip-wait-for-delete-timeout=1 --disable-eviction --ignore-daemonsets --delete-emptydir-data --delete-local-data: (3.27542626s)
	I0911 11:33:24.478793 2238380 node.go:108] successfully drained node "m03"
	I0911 11:33:24.479300 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:33:24.479669 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:33:24.480011 2238380 request.go:1212] Request Body: {"kind":"DeleteOptions","apiVersion":"v1"}
	I0911 11:33:24.480085 2238380 round_trippers.go:463] DELETE https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:33:24.480095 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:24.480106 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:24.480118 2238380 round_trippers.go:473]     Content-Type: application/json
	I0911 11:33:24.480131 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:24.499170 2238380 round_trippers.go:574] Response Status: 200 OK in 19 milliseconds
	I0911 11:33:24.499219 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:24.499232 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:24.499242 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:24.499251 2238380 round_trippers.go:580]     Content-Length: 171
	I0911 11:33:24.499260 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:24 GMT
	I0911 11:33:24.499269 2238380 round_trippers.go:580]     Audit-Id: 9f3f41b4-3b2d-4542-a156-36e5a833d3ca
	I0911 11:33:24.499286 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:24.499294 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:24.500640 2238380 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Success","details":{"name":"multinode-378707-m03","kind":"nodes","uid":"b37f601c-a45d-4f04-b0fa-26387559968e"}}
	I0911 11:33:24.500729 2238380 node.go:124] successfully deleted node "m03"
	I0911 11:33:24.500745 2238380 start.go:318] successfully removed existing worker node "m03" from cluster: &{Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0911 11:33:24.500781 2238380 start.go:322] trying to join worker node "m03" to cluster: &{Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0911 11:33:24.500830 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm join control-plane.minikube.internal:8443 --token wc3sva.jhsiumgg3ee67574 --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed --ignore-preflight-errors=all --cri-socket /var/run/crio/crio.sock --node-name=multinode-378707-m03"
	I0911 11:33:24.579496 2238380 command_runner.go:130] > [preflight] Running pre-flight checks
	I0911 11:33:24.752824 2238380 command_runner.go:130] > [preflight] Reading configuration from the cluster...
	I0911 11:33:24.752863 2238380 command_runner.go:130] > [preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
	I0911 11:33:24.820290 2238380 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 11:33:24.820315 2238380 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 11:33:24.820321 2238380 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0911 11:33:24.960051 2238380 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...
	I0911 11:33:25.482234 2238380 command_runner.go:130] > This node has joined the cluster:
	I0911 11:33:25.482263 2238380 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0911 11:33:25.482269 2238380 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0911 11:33:25.482276 2238380 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0911 11:33:25.485204 2238380 command_runner.go:130] ! W0911 11:33:24.573217    2481 initconfiguration.go:120] Usage of CRI endpoints without URL scheme is deprecated and can cause kubelet errors in the future. Automatically prepending scheme "unix" to the "criSocket" with value "/var/run/crio/crio.sock". Please update your configuration!
	I0911 11:33:25.485226 2238380 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
	I0911 11:33:25.485233 2238380 command_runner.go:130] ! 	[WARNING Port-10250]: Port 10250 is in use
	I0911 11:33:25.485241 2238380 command_runner.go:130] ! 	[WARNING FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists
	I0911 11:33:25.485267 2238380 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0911 11:33:25.766916 2238380 start.go:303] JoinCluster complete in 4.778486362s
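	Lightly condensed, the node replacement performed above amounts to the following sequence (flags and credentials taken from this log; the node object was actually deleted through a direct API call rather than kubectl, and kubeadm's warning suggests prefixing the CRI socket with unix://):

	  # on the control plane
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.28.1/kubectl drain multinode-378707-m03 \
	    --force --grace-period=1 --ignore-daemonsets --delete-emptydir-data --disable-eviction
	  sudo KUBECONFIG=/var/lib/minikube/kubeconfig \
	    /var/lib/minikube/binaries/v1.28.1/kubectl delete node multinode-378707-m03
	  # on the rejoining worker
	  sudo kubeadm join control-plane.minikube.internal:8443 \
	    --token wc3sva.jhsiumgg3ee67574 \
	    --discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	    --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock \
	    --node-name=multinode-378707-m03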
	I0911 11:33:25.766953 2238380 cni.go:84] Creating CNI manager for ""
	I0911 11:33:25.766961 2238380 cni.go:136] 3 nodes found, recommending kindnet
	I0911 11:33:25.767030 2238380 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0911 11:33:25.774632 2238380 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0911 11:33:25.774661 2238380 command_runner.go:130] >   Size: 2615256   	Blocks: 5112       IO Block: 4096   regular file
	I0911 11:33:25.774676 2238380 command_runner.go:130] > Device: 11h/17d	Inode: 3544        Links: 1
	I0911 11:33:25.774687 2238380 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0911 11:33:25.774697 2238380 command_runner.go:130] > Access: 2023-09-11 11:29:14.744855626 +0000
	I0911 11:33:25.774709 2238380 command_runner.go:130] > Modify: 2023-08-24 15:47:28.000000000 +0000
	I0911 11:33:25.774721 2238380 command_runner.go:130] > Change: 2023-09-11 11:29:12.801855626 +0000
	I0911 11:33:25.774728 2238380 command_runner.go:130] >  Birth: -
	I0911 11:33:25.774788 2238380 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 11:33:25.774802 2238380 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0911 11:33:25.796139 2238380 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 11:33:26.138077 2238380 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:33:26.152216 2238380 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0911 11:33:26.158979 2238380 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0911 11:33:26.175888 2238380 command_runner.go:130] > daemonset.apps/kindnet configured
	I0911 11:33:26.179015 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:33:26.179429 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:33:26.179908 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0911 11:33:26.179928 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.179936 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.179945 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.182806 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.182829 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.182838 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.182847 2238380 round_trippers.go:580]     Audit-Id: fc53b23e-4418-4b4e-9e8e-ec1f26eccf40
	I0911 11:33:26.182855 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.182864 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.182874 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.182884 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.182897 2238380 round_trippers.go:580]     Content-Length: 291
	I0911 11:33:26.182931 2238380 request.go:1212] Response Body: {"kind":"Scale","apiVersion":"autoscaling/v1","metadata":{"name":"coredns","namespace":"kube-system","uid":"64cee900-5094-4f8d-89de-d10f65816cce","resourceVersion":"897","creationTimestamp":"2023-09-11T11:19:21Z"},"spec":{"replicas":1},"status":{"replicas":1,"selector":"k8s-app=kube-dns"}}
	I0911 11:33:26.183042 2238380 kapi.go:248] "coredns" deployment in "kube-system" namespace and "multinode-378707" context rescaled to 1 replicas
	I0911 11:33:26.183078 2238380 start.go:223] Will wait 6m0s for node &{Name:m03 IP:192.168.39.134 Port:0 KubernetesVersion:v1.28.1 ContainerRuntime: ControlPlane:false Worker:true}
	I0911 11:33:26.186489 2238380 out.go:177] * Verifying Kubernetes components...
	I0911 11:33:26.188258 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:33:26.202728 2238380 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:33:26.202972 2238380 kapi.go:59] client config for multinode-378707: &rest.Config{Host:"https://192.168.39.237:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/multinode-378707/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil
), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:33:26.203215 2238380 node_ready.go:35] waiting up to 6m0s for node "multinode-378707-m03" to be "Ready" ...
	I0911 11:33:26.203284 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:33:26.203292 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.203299 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.203306 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.206040 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.206063 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.206070 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.206076 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.206084 2238380 round_trippers.go:580]     Audit-Id: aa5da3f2-a0fa-4f42-85df-57cef66563cb
	I0911 11:33:26.206093 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.206101 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.206117 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.206218 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m03","uid":"fd0aa51d-4a71-4484-b43b-5c225c8665bf","resourceVersion":"1242","creationTimestamp":"2023-09-11T11:33:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:33:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:33:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0911 11:33:26.206580 2238380 node_ready.go:49] node "multinode-378707-m03" has status "Ready":"True"
	I0911 11:33:26.206601 2238380 node_ready.go:38] duration metric: took 3.370656ms waiting for node "multinode-378707-m03" to be "Ready" ...
	I0911 11:33:26.206609 2238380 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
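	With the node reporting Ready, the same state can be cross-checked from the host against the multinode-378707 context, for example (standard kubectl usage, not part of this log):

	  kubectl --context multinode-378707 get nodes -o wide
	  kubectl --context multinode-378707 get pods -n kube-system -o wide \
	    --field-selector spec.nodeName=multinode-378707-m03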
	I0911 11:33:26.206674 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods
	I0911 11:33:26.206682 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.206690 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.206696 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.211555 2238380 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0911 11:33:26.211573 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.211579 2238380 round_trippers.go:580]     Audit-Id: df5d0cc0-9f29-4500-b6a2-266c24d2adf1
	I0911 11:33:26.211585 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.211591 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.211596 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.211604 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.211612 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.212645 2238380 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1246"},"items":[{"metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"893","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"
f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers": [truncated 81934 chars]
	I0911 11:33:26.215387 2238380 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.215484 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-fzpjk
	I0911 11:33:26.215494 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.215501 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.215507 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.218371 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.218394 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.218403 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.218412 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.218419 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.218432 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.218446 2238380 round_trippers.go:580]     Audit-Id: 4915b53d-5ba0-4011-bff2-c2b3ef5a8d48
	I0911 11:33:26.218458 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.218847 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-5dd5756b68-fzpjk","generateName":"coredns-5dd5756b68-","namespace":"kube-system","uid":"f72f6ba0-92a3-4108-a37f-e6ad5009c37c","resourceVersion":"893","creationTimestamp":"2023-09-11T11:19:36Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"5dd5756b68"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-5dd5756b68","uid":"b3cb30cb-facf-4710-8066-4e08fbc5dc89","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:36Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b3cb30cb-facf-4710-8066-4e08fbc5dc89\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{
}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f:i [truncated 6265 chars]
	I0911 11:33:26.219316 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:26.219330 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.219337 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.219345 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.221787 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.221809 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.221819 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.221828 2238380 round_trippers.go:580]     Audit-Id: 521cc1ee-df0e-4115-a833-a2bdf8c90960
	I0911 11:33:26.221849 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.221862 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.221871 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.221883 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.222030 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:33:26.222353 2238380 pod_ready.go:92] pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:26.222368 2238380 pod_ready.go:81] duration metric: took 6.956733ms waiting for pod "coredns-5dd5756b68-fzpjk" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.222378 2238380 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.222444 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-378707
	I0911 11:33:26.222452 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.222459 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.222466 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.225067 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.225091 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.225102 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.225112 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.225121 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.225135 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.225152 2238380 round_trippers.go:580]     Audit-Id: 9f57df6c-7fb7-4111-bcd4-2527dd5ec9bc
	I0911 11:33:26.225165 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.225309 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-378707","namespace":"kube-system","uid":"30882221-42a4-42a4-9911-63a8ff26c903","resourceVersion":"885","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.168.39.237:2379","kubernetes.io/config.hash":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.mirror":"301ff3085dd9ceb3eda8ae352974f3c3","kubernetes.io/config.seen":"2023-09-11T11:19:21.954681050Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-
client-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/confi [truncated 5853 chars]
	I0911 11:33:26.225700 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:26.225713 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.225722 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.225730 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.227814 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.227835 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.227846 2238380 round_trippers.go:580]     Audit-Id: 0e334209-5d5f-4952-aba1-e6b66400b1e1
	I0911 11:33:26.227854 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.227860 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.227866 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.227874 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.227880 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.228024 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:33:26.228339 2238380 pod_ready.go:92] pod "etcd-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:26.228352 2238380 pod_ready.go:81] duration metric: took 5.96108ms waiting for pod "etcd-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.228371 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.228429 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-378707
	I0911 11:33:26.228437 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.228443 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.228451 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.230869 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.230901 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.230911 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.230920 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.230930 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.230942 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.230949 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.230965 2238380 round_trippers.go:580]     Audit-Id: a1d42b0e-877a-4fc1-ada8-b5bd531fb393
	I0911 11:33:26.231204 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-378707","namespace":"kube-system","uid":"6cc96039-3a17-4243-93b6-4bf3ed6f69a8","resourceVersion":"861","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.168.39.237:8443","kubernetes.io/config.hash":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.mirror":"4ac3958118ce3f6e7dda52fe654787ec","kubernetes.io/config.seen":"2023-09-11T11:19:21.954683933Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.ku
bernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes [truncated 7389 chars]
	I0911 11:33:26.231712 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:26.231726 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.231733 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.231739 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.233737 2238380 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:33:26.233752 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.233759 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.233764 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.233770 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.233775 2238380 round_trippers.go:580]     Audit-Id: a1a3313b-221b-40c2-bcc7-e339db75cf98
	I0911 11:33:26.233783 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.233790 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.234153 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:33:26.234551 2238380 pod_ready.go:92] pod "kube-apiserver-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:26.234574 2238380 pod_ready.go:81] duration metric: took 6.190383ms waiting for pod "kube-apiserver-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.234586 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.234654 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-378707
	I0911 11:33:26.234665 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.234694 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.234707 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.236907 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.236925 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.236934 2238380 round_trippers.go:580]     Audit-Id: 354e8007-5a63-4ff8-b243-a0ff6d12f844
	I0911 11:33:26.236941 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.236950 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.236959 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.236972 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.236985 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.237169 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-378707","namespace":"kube-system","uid":"7bd2ecf1-1558-4680-9075-d30d989a0568","resourceVersion":"859","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.mirror":"ee5490370c5fc8b73824fd7337130039","kubernetes.io/config.seen":"2023-09-11T11:19:21.954684910Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.i
o/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".":{ [truncated 6954 chars]
	I0911 11:33:26.237580 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:26.237591 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.237602 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.237614 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.239582 2238380 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0911 11:33:26.239597 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.239604 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.239610 2238380 round_trippers.go:580]     Audit-Id: 38441c3c-2113-4626-88b8-3de92f632120
	I0911 11:33:26.239618 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.239626 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.239632 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.239639 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.239778 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:33:26.240171 2238380 pod_ready.go:92] pod "kube-controller-manager-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:26.240191 2238380 pod_ready.go:81] duration metric: took 5.596628ms waiting for pod "kube-controller-manager-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.240204 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.403559 2238380 request.go:629] Waited for 163.272979ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:33:26.403628 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8gcxx
	I0911 11:33:26.403633 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.403641 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.403649 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.406392 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.406430 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.406441 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.406450 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.406467 2238380 round_trippers.go:580]     Audit-Id: 5ddc413c-eeee-48a3-a84e-9410d1fcddc1
	I0911 11:33:26.406475 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.406484 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.406497 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.406792 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8gcxx","generateName":"kube-proxy-","namespace":"kube-system","uid":"f1bb96ad-eb2c-4eeb-b1f8-abb67568b5e7","resourceVersion":"1033","creationTimestamp":"2023-09-11T11:20:17Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:20:17Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0911 11:33:26.603475 2238380 request.go:629] Waited for 196.21172ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:33:26.603560 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m02
	I0911 11:33:26.603565 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.603573 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.603580 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.606205 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:26.606229 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.606236 2238380 round_trippers.go:580]     Audit-Id: 897228fb-81b1-472b-92a5-8b1c458c0055
	I0911 11:33:26.606242 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.606247 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.606252 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.606257 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.606275 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.606561 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m02","uid":"45a0aa36-7b9f-42cb-bb77-1d667e90ffbf","resourceVersion":"1055","creationTimestamp":"2023-09-11T11:31:44Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m02","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:31:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.1.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:31:44Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0911 11:33:26.607019 2238380 pod_ready.go:92] pod "kube-proxy-8gcxx" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:26.607040 2238380 pod_ready.go:81] duration metric: took 366.825943ms waiting for pod "kube-proxy-8gcxx" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.607051 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:26.803376 2238380 request.go:629] Waited for 196.238163ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:33:26.803456 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kwvbm
	I0911 11:33:26.803461 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:26.803470 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:26.803477 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:26.806840 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:33:26.806861 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:26.806868 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:26.806874 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:26.806880 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:26.806885 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:26 GMT
	I0911 11:33:26.806891 2238380 round_trippers.go:580]     Audit-Id: 0310a06f-fb99-4aa2-9268-de12508df4bb
	I0911 11:33:26.806896 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:26.807255 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-kwvbm","generateName":"kube-proxy-","namespace":"kube-system","uid":"6a1764e3-ef89-4687-874e-03baf3e90296","resourceVersion":"1213","creationTimestamp":"2023-09-11T11:21:07Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:21:07Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5731 chars]
	I0911 11:33:27.004226 2238380 request.go:629] Waited for 196.442635ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:33:27.004296 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707-m03
	I0911 11:33:27.004301 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:27.004310 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:27.004316 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:27.007241 2238380 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0911 11:33:27.007272 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:27.007283 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:27 GMT
	I0911 11:33:27.007301 2238380 round_trippers.go:580]     Audit-Id: e291de8c-3e4a-47f8-bf8b-94a08a46942f
	I0911 11:33:27.007310 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:27.007318 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:27.007326 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:27.007336 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:27.007772 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707-m03","uid":"fd0aa51d-4a71-4484-b43b-5c225c8665bf","resourceVersion":"1242","creationTimestamp":"2023-09-11T11:33:25Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707-m03","kubernetes.io/os":"linux"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"/var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:33:25Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:node.alpha.kubernetes.io/ttl":{}}},"f:spec":{"f:podCIDR":{},"f:podCIDRs":{".":{},"v:\"10.244.2.0/24\"":{}}}}},{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:33:25Z","fieldsTy
pe":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{"f:kubeadm.al [truncated 3442 chars]
	I0911 11:33:27.008048 2238380 pod_ready.go:92] pod "kube-proxy-kwvbm" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:27.008069 2238380 pod_ready.go:81] duration metric: took 401.010707ms waiting for pod "kube-proxy-kwvbm" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:27.008080 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:27.203469 2238380 request.go:629] Waited for 195.298544ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:33:27.203571 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-proxy-snbc8
	I0911 11:33:27.203584 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:27.203595 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:27.203608 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:27.210463 2238380 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0911 11:33:27.210487 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:27.210495 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:27.210500 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:27.210506 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:27.210512 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:27.210517 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:27 GMT
	I0911 11:33:27.210522 2238380 round_trippers.go:580]     Audit-Id: 9f7eca2c-3031-4741-9928-db7533e885e3
	I0911 11:33:27.211565 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-snbc8","generateName":"kube-proxy-","namespace":"kube-system","uid":"c3bb9995-3cd6-4433-a326-3da0a7f4aff3","resourceVersion":"826","creationTimestamp":"2023-09-11T11:19:35Z","labels":{"controller-revision-hash":"5d69f4f5b5","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"aab51725-0f7d-4259-9d77-6ee9f268697e","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"aab51725-0f7d-4259-9d77-6ee9f268697e\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5518 chars]
	I0911 11:33:27.403411 2238380 request.go:629] Waited for 191.29182ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:27.403489 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:27.403494 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:27.403502 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:27.403509 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:27.411476 2238380 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0911 11:33:27.411508 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:27.411519 2238380 round_trippers.go:580]     Audit-Id: 34eb413f-4044-4f18-9132-80ad2d5cd4e8
	I0911 11:33:27.411529 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:27.411537 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:27.411545 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:27.411554 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:27.411562 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:27 GMT
	I0911 11:33:27.411706 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:33:27.412172 2238380 pod_ready.go:92] pod "kube-proxy-snbc8" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:27.412210 2238380 pod_ready.go:81] duration metric: took 404.123236ms waiting for pod "kube-proxy-snbc8" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:27.412231 2238380 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:27.603754 2238380 request.go:629] Waited for 191.419409ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:33:27.603826 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-378707
	I0911 11:33:27.603836 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:27.603847 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:27.603860 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:27.606975 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:33:27.607010 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:27.607022 2238380 round_trippers.go:580]     Audit-Id: b47f8415-00e1-465d-8c14-76425c36934c
	I0911 11:33:27.607031 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:27.607039 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:27.607048 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:27.607058 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:27.607067 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:27 GMT
	I0911 11:33:27.607168 2238380 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-378707","namespace":"kube-system","uid":"51055ddb-deff-4b5d-9a90-0cd2c9dc8aa7","resourceVersion":"867","creationTimestamp":"2023-09-11T11:19:22Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.mirror":"47ac46ded21e848957a0f2d3767001da","kubernetes.io/config.seen":"2023-09-11T11:19:21.954685589Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2023-09-11T11:19:22Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{},
"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{} [truncated 4684 chars]
	I0911 11:33:27.803635 2238380 request.go:629] Waited for 195.978002ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:27.803700 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes/multinode-378707
	I0911 11:33:27.803707 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:27.803715 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:27.803722 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:27.807034 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:33:27.807063 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:27.807074 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:27.807082 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:27.807091 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:27.807099 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:27 GMT
	I0911 11:33:27.807109 2238380 round_trippers.go:580]     Audit-Id: bc6ffb2b-0510-4cd9-a11f-0b1898a064fe
	I0911 11:33:27.807122 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:27.807718 2238380 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","apiVe
rsion":"v1","time":"2023-09-11T11:19:18Z","fieldsType":"FieldsV1","fiel [truncated 6213 chars]
	I0911 11:33:27.808068 2238380 pod_ready.go:92] pod "kube-scheduler-multinode-378707" in "kube-system" namespace has status "Ready":"True"
	I0911 11:33:27.808082 2238380 pod_ready.go:81] duration metric: took 395.844771ms waiting for pod "kube-scheduler-multinode-378707" in "kube-system" namespace to be "Ready" ...
	I0911 11:33:27.808093 2238380 pod_ready.go:38] duration metric: took 1.601471575s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:33:27.808113 2238380 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:33:27.808171 2238380 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:33:27.822151 2238380 system_svc.go:56] duration metric: took 14.019787ms WaitForService to wait for kubelet.
	I0911 11:33:27.822177 2238380 kubeadm.go:581] duration metric: took 1.639065507s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:33:27.822203 2238380 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:33:28.004015 2238380 request.go:629] Waited for 181.70176ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.237:8443/api/v1/nodes
	I0911 11:33:28.004087 2238380 round_trippers.go:463] GET https://192.168.39.237:8443/api/v1/nodes
	I0911 11:33:28.004091 2238380 round_trippers.go:469] Request Headers:
	I0911 11:33:28.004100 2238380 round_trippers.go:473]     Accept: application/json, */*
	I0911 11:33:28.004106 2238380 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0911 11:33:28.007334 2238380 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0911 11:33:28.007362 2238380 round_trippers.go:577] Response Headers:
	I0911 11:33:28.007370 2238380 round_trippers.go:580]     Audit-Id: 756be892-88a0-4659-ba37-f4cc18b43979
	I0911 11:33:28.007376 2238380 round_trippers.go:580]     Cache-Control: no-cache, private
	I0911 11:33:28.007382 2238380 round_trippers.go:580]     Content-Type: application/json
	I0911 11:33:28.007387 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 0bab608e-7857-4c98-ba40-1408f288fc5c
	I0911 11:33:28.007392 2238380 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: 9f7ea96b-75ce-4082-a3d1-5fca3ea6c85d
	I0911 11:33:28.007398 2238380 round_trippers.go:580]     Date: Mon, 11 Sep 2023 11:33:28 GMT
	I0911 11:33:28.007596 2238380 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1255"},"items":[{"metadata":{"name":"multinode-378707","uid":"c74216c7-600a-4c91-811c-9d3ad80f86b0","resourceVersion":"912","creationTimestamp":"2023-09-11T11:19:18Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-378707","kubernetes.io/os":"linux","minikube.k8s.io/commit":"58460de6978298fe1c37b30354468f3a287d03e9","minikube.k8s.io/name":"multinode-378707","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2023_09_11T11_19_23_0700","minikube.k8s.io/version":"v1.31.2","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/crio/crio.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedField
s":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time": [truncated 15135 chars]
	I0911 11:33:28.008214 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:33:28.008234 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:33:28.008244 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:33:28.008248 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:33:28.008251 2238380 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:33:28.008254 2238380 node_conditions.go:123] node cpu capacity is 2
	I0911 11:33:28.008257 2238380 node_conditions.go:105] duration metric: took 186.050667ms to run NodePressure ...
	I0911 11:33:28.008267 2238380 start.go:228] waiting for startup goroutines ...
	I0911 11:33:28.008285 2238380 start.go:242] writing updated cluster config ...
	I0911 11:33:28.008599 2238380 ssh_runner.go:195] Run: rm -f paused
	I0911 11:33:28.063634 2238380 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 11:33:28.066387 2238380 out.go:177] * Done! kubectl is now configured to use "multinode-378707" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 11:29:13 UTC, ends at Mon 2023-09-11 11:33:29 UTC. --
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.167142091Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8f081d1f-343e-4ea1-8af1-88015c7fc441 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.167553528Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f081d1f-343e-4ea1-8af1-88015c7fc441 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.210984933Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5cab414d-8f24-4b32-a33f-99b720560fc8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.211082384Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5cab414d-8f24-4b32-a33f-99b720560fc8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.211724070Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5cab414d-8f24-4b32-a33f-99b720560fc8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.260053482Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=8be9f260-9a76-459f-9b7d-49f03b88b846 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.260155089Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=8be9f260-9a76-459f-9b7d-49f03b88b846 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.260493532Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8be9f260-9a76-459f-9b7d-49f03b88b846 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.311837207Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b564992d-c1e4-413e-9afd-b4845aa36524 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.312173390Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b564992d-c1e4-413e-9afd-b4845aa36524 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.312673048Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b564992d-c1e4-413e-9afd-b4845aa36524 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.336149326Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=c87c24d9-a538-4a15-923c-33e7d5d7d4c6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.336598462Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&PodSandboxMetadata{Name:busybox-5bc68d56bd-4jnst,Uid:6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431803453346878,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,pod-template-hash: 5bc68d56bd,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:29:47.392928607Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-fzpjk,Uid:f72f6ba0-92a3-4108-a37f-e6ad5009c37c,Namespace:kube-system,Attempt:0
,},State:SANDBOX_READY,CreatedAt:1694431803155817176,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:29:47.392933643Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&PodSandboxMetadata{Name:kube-proxy-snbc8,Uid:c3bb9995-3cd6-4433-a326-3da0a7f4aff3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431788667922016,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7f4aff3,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]strin
g{kubernetes.io/config.seen: 2023-09-11T11:29:47.392926784Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&PodSandboxMetadata{Name:kindnet-gxpnd,Uid:e59da67c-e818-45db-bbcd-db99a4310bf1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431788659213112,Labels:map[string]string{app: kindnet,controller-revision-hash: 77b9cf4878,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e59da67c-e818-45db-bbcd-db99a4310bf1,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:29:47.392931076Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:77e1a93d-fc34-4f05-8320-169bb6c93e46,Namespace:kube-system,Attempt:0,},Sta
te:SANDBOX_READY,CreatedAt:1694431788642000239,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/t
mp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-11T11:29:47.392932530Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&PodSandboxMetadata{Name:etcd-multinode-378707,Uid:301ff3085dd9ceb3eda8ae352974f3c3,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431780941995503,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.237:2379,kubernetes.io/config.hash: 301ff3085dd9ceb3eda8ae352974f3c3,kubernetes.io/config.seen: 2023-09-11T11:29:40.352957629Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Meta
data:&PodSandboxMetadata{Name:kube-apiserver-multinode-378707,Uid:4ac3958118ce3f6e7dda52fe654787ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431780941395430,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.237:8443,kubernetes.io/config.hash: 4ac3958118ce3f6e7dda52fe654787ec,kubernetes.io/config.seen: 2023-09-11T11:29:40.352958686Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&PodSandboxMetadata{Name:kube-scheduler-multinode-378707,Uid:47ac46ded21e848957a0f2d3767001da,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431780909154141,Labels:map[string]string{co
mponent: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 47ac46ded21e848957a0f2d3767001da,kubernetes.io/config.seen: 2023-09-11T11:29:40.352954652Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-multinode-378707,Uid:ee5490370c5fc8b73824fd7337130039,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694431780887698836,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,tier: control-plane,},Annotations:map[string]string{kuber
netes.io/config.hash: ee5490370c5fc8b73824fd7337130039,kubernetes.io/config.seen: 2023-09-11T11:29:40.352959775Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=c87c24d9-a538-4a15-923c-33e7d5d7d4c6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.337569740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3cba63b8-a9b2-4562-85e4-4c172d3737a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.337671641Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3cba63b8-a9b2-4562-85e4-4c172d3737a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.338078943Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3cba63b8-a9b2-4562-85e4-4c172d3737a7 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.359436761Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=18d1f4bf-768c-496d-a3a4-c0b1220a25cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.359529449Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=18d1f4bf-768c-496d-a3a4-c0b1220a25cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.359754005Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=18d1f4bf-768c-496d-a3a4-c0b1220a25cf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.401313657Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5a7c4397-6318-480d-b3b6-c77b3e185893 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.401377439Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5a7c4397-6318-480d-b3b6-c77b3e185893 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.401594803Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5a7c4397-6318-480d-b3b6-c77b3e185893 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.446838401Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cbda59a9-27cd-4cf4-bdaf-8eff3a7d8b5f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.446996942Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cbda59a9-27cd-4cf4-bdaf-8eff3a7d8b5f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:33:29 multinode-378707 crio[720]: time="2023-09-11 11:33:29.447228802Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:4adee9f1f5ee42078c0916641a82603b08e4b3838748b58f270ded35fba84e4c,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694431820653805489,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c93e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 2,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e558ca4ad4c98ce433905e9be56903b97b3c5378ff29ad9a187a629a72257f55,PodSandboxId:18cbb830d03f1ba9e04ccda74d079644e73ab4cfbd35d760a3a2d587f49a7431,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,State:CONTAINER_RUNNING,CreatedAt:1694431805393303376,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5bc68d56bd-4jnst,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6e7ad0e9-a68b-4dab-a3bd-c91300933bb8,},Annotations:map[string]string{io.kubernetes.container.hash: 1bd890e7,io.kubernetes.container.restartCount: 1,io.kubern
etes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335,PodSandboxId:bdb6df1ab0dcc5206a40f6e62da2713b358d469bbe8f6f2a496fad16e73a20a0,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694431803881548269,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-fzpjk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f72f6ba0-92a3-4108-a37f-e6ad5009c37c,},Annotations:map[string]string{io.kubernetes.container.hash: b54844e5,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"prot
ocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037,PodSandboxId:98a44ed64c58f89dcce68253d869763ac511077bd7f130d8d789cee7d4beb7ea,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da,Annotations:map[string]string{},},ImageRef:docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974,State:CONTAINER_RUNNING,CreatedAt:1694431791847594782,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-gxpnd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: e59da67c-e818-45db-bbcd-db99a4310bf1,},Annotations:map[string]string{io.kubernetes.container.hash: 9738ab6e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d,PodSandboxId:5731b74f39101086bf1067f790937d6aafda4faa71d780c9a04ad0201d7e20d5,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694431790024080387,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-snbc8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3bb9995-3cd6-4433-a326-3da0a7
f4aff3,},Annotations:map[string]string{io.kubernetes.container.hash: ba4298fa,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0,PodSandboxId:29da3b21e26859e63f5aa78e0997bc255eb16433fe9bead935a7a483f0bf86bf,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694431790116677406,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 77e1a93d-fc34-4f05-8320-169bb6c9
3e46,},Annotations:map[string]string{io.kubernetes.container.hash: 812bca4e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e,PodSandboxId:b01b6b3c5ecaec3319765e77aef661beb4e6ed5a246af41c4729b02696d54b2c,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694431781993089957,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 47ac46ded21e848957a0f2d3767001da,},Annot
ations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4,PodSandboxId:34a875945d75034bb17b3d0db683d06996544981b179037c57b361d1d8c1f249,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694431781844017662,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 301ff3085dd9ceb3eda8ae352974f3c3,},Annotations:map[string]string{io.kubernetes.container.has
h: 91e53050,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58,PodSandboxId:b9ec844a99bbe4a0934f0e67560ffa11e912ea116845b410dc6d80eabc681593,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694431781592652052,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4ac3958118ce3f6e7dda52fe654787ec,},Annotations:map[string]string{io.kubernetes.container.hash: fe332096,
io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81,PodSandboxId:7781db287d00dc84f2644b73aeb55a1552ba267481bf79c25ae2ce444f17206c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694431781454088306,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-378707,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ee5490370c5fc8b73824fd7337130039,},Annotations:map[string]string{io.kubernetes.c
ontainer.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cbda59a9-27cd-4cf4-bdaf-8eff3a7d8b5f name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	4adee9f1f5ee4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       2                   29da3b21e2685
	e558ca4ad4c98       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   1                   18cbb830d03f1
	ff221444babdd       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   bdb6df1ab0dcc
	c8da81bd55a5d       b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da                                      3 minutes ago       Running             kindnet-cni               1                   98a44ed64c58f
	b9d3cf5755b68       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       1                   29da3b21e2685
	c9d6a07ff91f1       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      3 minutes ago       Running             kube-proxy                1                   5731b74f39101
	4b213e56c7147       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      3 minutes ago       Running             kube-scheduler            1                   b01b6b3c5ecae
	e812b362e1359       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   34a875945d750
	d44ede6cdf606       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      3 minutes ago       Running             kube-apiserver            1                   b9ec844a99bbe
	4ceefa9b07e35       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      3 minutes ago       Running             kube-controller-manager   1                   7781db287d00d
	
	* 
	* ==> coredns [ff221444babdd468400f75b0ae80b79f207416fa0c0fa6def2d4656562ce0335] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:38678 - 59161 "HINFO IN 712297025414811050.6479353292485695363. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.013082149s
	
	* 
	* ==> describe nodes <==
	* Name:               multinode-378707
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-378707
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=multinode-378707
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_19_23_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:19:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-378707
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:33:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:30:18 +0000   Mon, 11 Sep 2023 11:19:16 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:30:18 +0000   Mon, 11 Sep 2023 11:19:16 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:30:18 +0000   Mon, 11 Sep 2023 11:19:16 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:30:18 +0000   Mon, 11 Sep 2023 11:29:57 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.237
	  Hostname:    multinode-378707
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 6530470fdaf445d7b75a40804cd959a7
	  System UUID:                6530470f-daf4-45d7-b75a-40804cd959a7
	  Boot ID:                    612baa4b-458e-43b1-ad44-dbe161b419bf
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-4jnst                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-5dd5756b68-fzpjk                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-multinode-378707                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-gxpnd                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-multinode-378707             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-multinode-378707    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-snbc8                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-multinode-378707             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 3m39s                  kube-proxy       
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node multinode-378707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node multinode-378707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node multinode-378707 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 14m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     14m                    kubelet          Node multinode-378707 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    14m                    kubelet          Node multinode-378707 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m                    kubelet          Node multinode-378707 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           13m                    node-controller  Node multinode-378707 event: Registered Node multinode-378707 in Controller
	  Normal  NodeReady                13m                    kubelet          Node multinode-378707 status is now: NodeReady
	  Normal  Starting                 3m49s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  3m49s (x8 over 3m49s)  kubelet          Node multinode-378707 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m49s (x8 over 3m49s)  kubelet          Node multinode-378707 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m49s (x7 over 3m49s)  kubelet          Node multinode-378707 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  3m49s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           3m29s                  node-controller  Node multinode-378707 event: Registered Node multinode-378707 in Controller
	
	
	Name:               multinode-378707-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-378707-m02
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:31:44 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-378707-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:33:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:31:44 +0000   Mon, 11 Sep 2023 11:31:44 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:31:44 +0000   Mon, 11 Sep 2023 11:31:44 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:31:44 +0000   Mon, 11 Sep 2023 11:31:44 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:31:44 +0000   Mon, 11 Sep 2023 11:31:44 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.220
	  Hostname:    multinode-378707-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 e9a79f30993549fd8eb95ddb2e1d94fa
	  System UUID:                e9a79f30-9935-49fd-8eb9-5ddb2e1d94fa
	  Boot ID:                    efae247c-51a2-42e8-a0d1-33d44305a36f
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-gqmmh    0 (0%)        0 (0%)      0 (0%)           0 (0%)         8s
	  kube-system                 kindnet-p8h9v               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8gcxx            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From        Message
	  ----     ------                   ----                   ----        -------
	  Normal   Starting                 13m                    kube-proxy  
	  Normal   Starting                 106s                   kube-proxy  
	  Normal   NodeHasSufficientMemory  13m (x5 over 13m)      kubelet     Node multinode-378707-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m (x5 over 13m)      kubelet     Node multinode-378707-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m (x5 over 13m)      kubelet     Node multinode-378707-m02 status is now: NodeHasSufficientPID
	  Normal   NodeReady                13m                    kubelet     Node multinode-378707-m02 status is now: NodeReady
	  Normal   NodeNotReady             2m51s                  kubelet     Node multinode-378707-m02 status is now: NodeNotReady
	  Warning  ContainerGCFailed        2m14s (x2 over 3m14s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   Starting                 105s                   kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  105s (x2 over 105s)    kubelet     Node multinode-378707-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    105s (x2 over 105s)    kubelet     Node multinode-378707-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     105s (x2 over 105s)    kubelet     Node multinode-378707-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  105s                   kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                105s                   kubelet     Node multinode-378707-m02 status is now: NodeReady
	
	
	Name:               multinode-378707-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-378707-m03
	                    kubernetes.io/os=linux
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:33:25 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-378707-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:33:25 +0000   Mon, 11 Sep 2023 11:33:25 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:33:25 +0000   Mon, 11 Sep 2023 11:33:25 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:33:25 +0000   Mon, 11 Sep 2023 11:33:25 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:33:25 +0000   Mon, 11 Sep 2023 11:33:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.134
	  Hostname:    multinode-378707-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 58ad635ee31f4f3bbd5b7a8e18f7056f
	  System UUID:                58ad635e-e31f-4f3b-bd5b-7a8e18f7056f
	  Boot ID:                    a949b4ff-1fb2-4c2d-a8ee-8019550df888
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5bc68d56bd-xg4bx    0 (0%)        0 (0%)      0 (0%)           0 (0%)         109s
	  kube-system                 kindnet-lrktz               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-proxy-kwvbm            0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                From        Message
	  ----     ------                   ----               ----        -------
	  Normal   Starting                 11m                kube-proxy  
	  Normal   Starting                 12m                kube-proxy  
	  Normal   Starting                 6s                 kube-proxy  
	  Normal   NodeHasNoDiskPressure    12m (x5 over 12m)  kubelet     Node multinode-378707-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m (x5 over 12m)  kubelet     Node multinode-378707-m03 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  12m (x5 over 12m)  kubelet     Node multinode-378707-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeReady                12m                kubelet     Node multinode-378707-m03 status is now: NodeReady
	  Normal   Starting                 11m                kubelet     Starting kubelet.
	  Normal   NodeAllocatableEnforced  11m                kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeNotReady             69s                kubelet     Node multinode-378707-m03 status is now: NodeNotReady
	  Warning  ContainerGCFailed        39s (x2 over 99s)  kubelet     rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   NodeReady                8s (x2 over 11m)   kubelet     Node multinode-378707-m03 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  5s (x4 over 11m)   kubelet     Node multinode-378707-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5s (x4 over 11m)   kubelet     Node multinode-378707-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5s (x4 over 11m)   kubelet     Node multinode-378707-m03 status is now: NodeHasSufficientPID
	  Normal   Starting                 4s                 kubelet     Starting kubelet.
	  Normal   NodeHasSufficientMemory  4s (x2 over 4s)    kubelet     Node multinode-378707-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4s (x2 over 4s)    kubelet     Node multinode-378707-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4s (x2 over 4s)    kubelet     Node multinode-378707-m03 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  4s                 kubelet     Updated Node Allocatable limit across pods
	  Normal   NodeReady                4s                 kubelet     Node multinode-378707-m03 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Sep11 11:29] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000004] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.082646] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.763708] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.648082] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.159877] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.774873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.342285] systemd-fstab-generator[645]: Ignoring "noauto" for root device
	[  +0.105026] systemd-fstab-generator[656]: Ignoring "noauto" for root device
	[  +0.154768] systemd-fstab-generator[669]: Ignoring "noauto" for root device
	[  +0.121242] systemd-fstab-generator[680]: Ignoring "noauto" for root device
	[  +0.229016] systemd-fstab-generator[704]: Ignoring "noauto" for root device
	[ +16.937423] systemd-fstab-generator[919]: Ignoring "noauto" for root device
	[ +20.086952] kauditd_printk_skb: 18 callbacks suppressed
	
	* 
	* ==> etcd [e812b362e135991fb4e68939d03f7b8e4761664a78ce073ddea73488005cacd4] <==
	* {"level":"info","ts":"2023-09-11T11:29:43.998212Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:29:43.998467Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2023-09-11T11:29:43.998499Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.237:2380"}
	{"level":"info","ts":"2023-09-11T11:29:43.999358Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be switched to configuration voters=(4544017535394177214)"}
	{"level":"info","ts":"2023-09-11T11:29:43.999635Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"db2c13b3d7f66f6a","local-member-id":"3f0f97df8a50e0be","added-peer-id":"3f0f97df8a50e0be","added-peer-peer-urls":["https://192.168.39.237:2380"]}
	{"level":"info","ts":"2023-09-11T11:29:43.999761Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"db2c13b3d7f66f6a","local-member-id":"3f0f97df8a50e0be","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:29:43.999816Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:29:45.863046Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T11:29:45.863136Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:29:45.86317Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be received MsgPreVoteResp from 3f0f97df8a50e0be at term 2"}
	{"level":"info","ts":"2023-09-11T11:29:45.863184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:29:45.863189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be received MsgVoteResp from 3f0f97df8a50e0be at term 3"}
	{"level":"info","ts":"2023-09-11T11:29:45.863202Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"3f0f97df8a50e0be became leader at term 3"}
	{"level":"info","ts":"2023-09-11T11:29:45.86323Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 3f0f97df8a50e0be elected leader 3f0f97df8a50e0be at term 3"}
	{"level":"info","ts":"2023-09-11T11:29:45.865399Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"3f0f97df8a50e0be","local-member-attributes":"{Name:multinode-378707 ClientURLs:[https://192.168.39.237:2379]}","request-path":"/0/members/3f0f97df8a50e0be/attributes","cluster-id":"db2c13b3d7f66f6a","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:29:45.865399Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:29:45.865741Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:29:45.867044Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.237:2379"}
	{"level":"info","ts":"2023-09-11T11:29:45.867072Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T11:29:45.867135Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:29:45.867176Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:29:49.572559Z","caller":"traceutil/trace.go:171","msg":"trace[348150957] linearizableReadLoop","detail":"{readStateIndex:867; appliedIndex:866; }","duration":"140.335071ms","start":"2023-09-11T11:29:49.432211Z","end":"2023-09-11T11:29:49.572546Z","steps":["trace[348150957] 'read index received'  (duration: 140.164629ms)","trace[348150957] 'applied index is now lower than readState.Index'  (duration: 169.807µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T11:29:49.572786Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"140.558244ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/service-account-controller\" ","response":"range_response_count:1 size:218"}
	{"level":"info","ts":"2023-09-11T11:29:49.572971Z","caller":"traceutil/trace.go:171","msg":"trace[540897154] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/service-account-controller; range_end:; response_count:1; response_revision:814; }","duration":"140.768097ms","start":"2023-09-11T11:29:49.432189Z","end":"2023-09-11T11:29:49.572957Z","steps":["trace[540897154] 'agreement among raft nodes before linearized reading'  (duration: 140.502647ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T11:29:49.573157Z","caller":"traceutil/trace.go:171","msg":"trace[24670933] transaction","detail":"{read_only:false; response_revision:814; number_of_response:1; }","duration":"144.837665ms","start":"2023-09-11T11:29:49.428313Z","end":"2023-09-11T11:29:49.573151Z","steps":["trace[24670933] 'process raft request'  (duration: 144.113455ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  11:33:29 up 4 min,  0 users,  load average: 0.39, 0.29, 0.13
	Linux multinode-378707 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kindnet [c8da81bd55a5d80bb46eae224860e0cc6ca0a8a546263589c87c44ff7d177037] <==
	* I0911 11:32:43.321665       1 main.go:250] Node multinode-378707-m03 has CIDR [10.244.3.0/24] 
	I0911 11:32:53.327741       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:32:53.328168       1 main.go:227] handling current node
	I0911 11:32:53.328204       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0911 11:32:53.328227       1 main.go:250] Node multinode-378707-m02 has CIDR [10.244.1.0/24] 
	I0911 11:32:53.328344       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0911 11:32:53.328364       1 main.go:250] Node multinode-378707-m03 has CIDR [10.244.3.0/24] 
	I0911 11:33:03.338636       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:33:03.338690       1 main.go:227] handling current node
	I0911 11:33:03.338701       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0911 11:33:03.338707       1 main.go:250] Node multinode-378707-m02 has CIDR [10.244.1.0/24] 
	I0911 11:33:03.338823       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0911 11:33:03.338828       1 main.go:250] Node multinode-378707-m03 has CIDR [10.244.3.0/24] 
	I0911 11:33:13.353453       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:33:13.353516       1 main.go:227] handling current node
	I0911 11:33:13.353538       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0911 11:33:13.353544       1 main.go:250] Node multinode-378707-m02 has CIDR [10.244.1.0/24] 
	I0911 11:33:13.353684       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0911 11:33:13.353690       1 main.go:250] Node multinode-378707-m03 has CIDR [10.244.3.0/24] 
	I0911 11:33:23.361347       1 main.go:223] Handling node with IPs: map[192.168.39.237:{}]
	I0911 11:33:23.361416       1 main.go:227] handling current node
	I0911 11:33:23.361432       1 main.go:223] Handling node with IPs: map[192.168.39.220:{}]
	I0911 11:33:23.361441       1 main.go:250] Node multinode-378707-m02 has CIDR [10.244.1.0/24] 
	I0911 11:33:23.361586       1 main.go:223] Handling node with IPs: map[192.168.39.134:{}]
	I0911 11:33:23.361595       1 main.go:250] Node multinode-378707-m03 has CIDR [10.244.3.0/24] 
	
	* 
	* ==> kube-apiserver [d44ede6cdf606d4bf867fe80627ae2d9e19a86baeebd710d446f2e2e92932f58] <==
	* I0911 11:29:47.271399       1 controller.go:80] Starting OpenAPI V3 AggregationController
	I0911 11:29:47.271575       1 gc_controller.go:78] Starting apiserver lease garbage collector
	I0911 11:29:47.284070       1 crdregistration_controller.go:111] Starting crd-autoregister controller
	I0911 11:29:47.284129       1 shared_informer.go:311] Waiting for caches to sync for crd-autoregister
	I0911 11:29:47.440628       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:29:47.463158       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 11:29:47.463338       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 11:29:47.464518       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0911 11:29:47.469146       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:29:47.471033       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 11:29:47.471110       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:29:47.484196       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:29:47.490284       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:29:47.490330       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:29:47.490338       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:29:47.490346       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:29:47.506539       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0911 11:29:48.272959       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:29:50.215168       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0911 11:29:50.480662       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 11:29:50.506185       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 11:29:50.618630       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:29:50.633458       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 11:30:00.066437       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:30:00.110061       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-controller-manager [4ceefa9b07e3570a1fd811479be123ba8746d68b467f74d672ed5907bd9d2c81] <==
	* I0911 11:31:44.403760       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-f9d7x" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-f9d7x"
	I0911 11:31:44.424402       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-378707-m02" podCIDRs=["10.244.1.0/24"]
	I0911 11:31:44.476290       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-378707-m02"
	I0911 11:31:45.308461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="55.319µs"
	I0911 11:31:56.596436       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="90.818µs"
	I0911 11:31:57.181268       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="62.811µs"
	I0911 11:31:57.184600       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.05µs"
	I0911 11:32:20.440500       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-378707-m02"
	I0911 11:33:21.249483       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-378707-m02"
	I0911 11:33:21.480832       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: busybox-5bc68d56bd-gqmmh"
	I0911 11:33:21.498365       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="35.341731ms"
	I0911 11:33:21.513211       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="14.691417ms"
	I0911 11:33:21.513498       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="244.066µs"
	I0911 11:33:21.517189       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="60.716µs"
	I0911 11:33:22.836223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="159.88µs"
	I0911 11:33:23.472154       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="15.197852ms"
	I0911 11:33:23.472354       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="75.498µs"
	I0911 11:33:24.494786       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-378707-m02"
	I0911 11:33:25.066977       1 event.go:307] "Event occurred" object="multinode-378707-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-378707-m03 event: Removing Node multinode-378707-m03 from Controller"
	I0911 11:33:25.190732       1 event.go:307] "Event occurred" object="default/busybox-5bc68d56bd-xg4bx" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5bc68d56bd-xg4bx"
	I0911 11:33:25.191117       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-378707-m02"
	I0911 11:33:25.191229       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-378707-m03\" does not exist"
	I0911 11:33:25.224401       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-378707-m03" podCIDRs=["10.244.2.0/24"]
	I0911 11:33:25.329618       1 topologycache.go:231] "Can't get CPU or zone information for node" node="multinode-378707-m02"
	I0911 11:33:26.081931       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5bc68d56bd" duration="114.827µs"
	
	* 
	* ==> kube-proxy [c9d6a07ff91f1915c2fcd5c31e79af6ede720d3d5b0fee43d62d37e09192d64d] <==
	* I0911 11:29:50.491414       1 server_others.go:69] "Using iptables proxy"
	I0911 11:29:50.547523       1 node.go:141] Successfully retrieved node IP: 192.168.39.237
	I0911 11:29:50.686698       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 11:29:50.686785       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 11:29:50.703288       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:29:50.703435       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:29:50.704091       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:29:50.704103       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:29:50.708171       1 config.go:188] "Starting service config controller"
	I0911 11:29:50.708205       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:29:50.708232       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:29:50.708237       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:29:50.709194       1 config.go:315] "Starting node config controller"
	I0911 11:29:50.709330       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:29:50.808993       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 11:29:50.809097       1 shared_informer.go:318] Caches are synced for service config
	I0911 11:29:50.809425       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [4b213e56c71475baea8039114d7536c5736ca4006ed3de51c07170b9b18e2d1e] <==
	* I0911 11:29:44.401819       1 serving.go:348] Generated self-signed cert in-memory
	W0911 11:29:47.397464       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 11:29:47.397662       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 11:29:47.397775       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:29:47.397803       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:29:47.451452       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 11:29:47.451621       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:29:47.459321       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 11:29:47.459455       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:29:47.466530       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 11:29:47.470188       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 11:29:47.560626       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:29:13 UTC, ends at Mon 2023-09-11 11:33:30 UTC. --
	Sep 11 11:29:51 multinode-378707 kubelet[925]: E0911 11:29:51.119296     925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e7ad0e9-a68b-4dab-a3bd-c91300933bb8-kube-api-access-682vf podName:6e7ad0e9-a68b-4dab-a3bd-c91300933bb8 nodeName:}" failed. No retries permitted until 2023-09-11 11:29:55.11926203 +0000 UTC m=+15.027985267 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-682vf" (UniqueName: "kubernetes.io/projected/6e7ad0e9-a68b-4dab-a3bd-c91300933bb8-kube-api-access-682vf") pod "busybox-5bc68d56bd-4jnst" (UID: "6e7ad0e9-a68b-4dab-a3bd-c91300933bb8") : object "default"/"kube-root-ca.crt" not registered
	Sep 11 11:29:51 multinode-378707 kubelet[925]: E0911 11:29:51.422346     925 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-fzpjk" podUID="f72f6ba0-92a3-4108-a37f-e6ad5009c37c"
	Sep 11 11:29:51 multinode-378707 kubelet[925]: E0911 11:29:51.422514     925 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-4jnst" podUID="6e7ad0e9-a68b-4dab-a3bd-c91300933bb8"
	Sep 11 11:29:53 multinode-378707 kubelet[925]: E0911 11:29:53.423048     925 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-fzpjk" podUID="f72f6ba0-92a3-4108-a37f-e6ad5009c37c"
	Sep 11 11:29:53 multinode-378707 kubelet[925]: E0911 11:29:53.423468     925 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-4jnst" podUID="6e7ad0e9-a68b-4dab-a3bd-c91300933bb8"
	Sep 11 11:29:55 multinode-378707 kubelet[925]: E0911 11:29:55.051059     925 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 11 11:29:55 multinode-378707 kubelet[925]: E0911 11:29:55.051186     925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f72f6ba0-92a3-4108-a37f-e6ad5009c37c-config-volume podName:f72f6ba0-92a3-4108-a37f-e6ad5009c37c nodeName:}" failed. No retries permitted until 2023-09-11 11:30:03.051168728 +0000 UTC m=+22.959891954 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/f72f6ba0-92a3-4108-a37f-e6ad5009c37c-config-volume") pod "coredns-5dd5756b68-fzpjk" (UID: "f72f6ba0-92a3-4108-a37f-e6ad5009c37c") : object "kube-system"/"coredns" not registered
	Sep 11 11:29:55 multinode-378707 kubelet[925]: E0911 11:29:55.152208     925 projected.go:292] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Sep 11 11:29:55 multinode-378707 kubelet[925]: E0911 11:29:55.152267     925 projected.go:198] Error preparing data for projected volume kube-api-access-682vf for pod default/busybox-5bc68d56bd-4jnst: object "default"/"kube-root-ca.crt" not registered
	Sep 11 11:29:55 multinode-378707 kubelet[925]: E0911 11:29:55.152334     925 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e7ad0e9-a68b-4dab-a3bd-c91300933bb8-kube-api-access-682vf podName:6e7ad0e9-a68b-4dab-a3bd-c91300933bb8 nodeName:}" failed. No retries permitted until 2023-09-11 11:30:03.152318252 +0000 UTC m=+23.061041479 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-682vf" (UniqueName: "kubernetes.io/projected/6e7ad0e9-a68b-4dab-a3bd-c91300933bb8-kube-api-access-682vf") pod "busybox-5bc68d56bd-4jnst" (UID: "6e7ad0e9-a68b-4dab-a3bd-c91300933bb8") : object "default"/"kube-root-ca.crt" not registered
	Sep 11 11:29:55 multinode-378707 kubelet[925]: E0911 11:29:55.422196     925 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="default/busybox-5bc68d56bd-4jnst" podUID="6e7ad0e9-a68b-4dab-a3bd-c91300933bb8"
	Sep 11 11:29:55 multinode-378707 kubelet[925]: E0911 11:29:55.422275     925 pod_workers.go:1300] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-5dd5756b68-fzpjk" podUID="f72f6ba0-92a3-4108-a37f-e6ad5009c37c"
	Sep 11 11:30:20 multinode-378707 kubelet[925]: I0911 11:30:20.619705     925 scope.go:117] "RemoveContainer" containerID="b9d3cf5755b68babda7130ba0257eb989606e750166a172e0479bf321490a7d0"
	Sep 11 11:30:40 multinode-378707 kubelet[925]: E0911 11:30:40.440687     925 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 11:30:40 multinode-378707 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 11:30:40 multinode-378707 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 11:30:40 multinode-378707 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 11:31:40 multinode-378707 kubelet[925]: E0911 11:31:40.441762     925 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 11:31:40 multinode-378707 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 11:31:40 multinode-378707 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 11:31:40 multinode-378707 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 11:32:40 multinode-378707 kubelet[925]: E0911 11:32:40.441001     925 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 11:32:40 multinode-378707 kubelet[925]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 11:32:40 multinode-378707 kubelet[925]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 11:32:40 multinode-378707 kubelet[925]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-378707 -n multinode-378707
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-378707 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (690.24s)
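Aside on the repeated kubelet "Could not set up iptables canary" errors in the log above: ip6tables reports that the `nat' table does not exist in the guest kernel, which matches kube-proxy's earlier "No iptables support for family" ipFamily="IPv6" line. A minimal check, assuming SSH access to the profile from this run (commands are illustrative, not part of the test):

    # Confirm whether the IPv6 nat table / ip6table_nat module is available inside the VM
    minikube -p multinode-378707 ssh -- 'lsmod | grep ip6table_nat; sudo ip6tables -t nat -L'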

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (143.22s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 stop
E0911 11:33:47.569770 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:34:15.053404 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
multinode_test.go:314: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378707 stop: exit status 82 (2m1.245353687s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-378707"  ...
	* Stopping node "multinode-378707"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:316: node stop returned an error. args "out/minikube-linux-amd64 -p multinode-378707 stop": exit status 82
multinode_test.go:320: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status
multinode_test.go:320: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378707 status: exit status 3 (18.791541991s)

                                                
                                                
-- stdout --
	multinode-378707
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	multinode-378707-m02
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 11:35:52.997222 2240775 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.237:22: connect: no route to host
	E0911 11:35:52.997271 2240775 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.237:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-378707 status" : exit status 3
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-378707 -n multinode-378707
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p multinode-378707 -n multinode-378707: exit status 3 (3.183183429s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 11:35:56.357233 2240880 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.237:22: connect: no route to host
	E0911 11:35:56.357258 2240880 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.237:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "multinode-378707" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiNode/serial/StopMultiNode (143.22s)

                                                
                                    
x
+
TestPreload (188.27s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-862767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0911 11:46:22.842824 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:46:50.619856 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-862767 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (1m42.760470116s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-862767 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-862767 image pull gcr.io/k8s-minikube/busybox: (1.164628529s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-862767
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-862767: (9.104041415s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-862767 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio
E0911 11:48:47.569054 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-862767 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=kvm2  --container-runtime=crio: (1m11.871248968s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-862767 image list
preload_test.go:76: Expected to find gcr.io/k8s-minikube/busybox in image list output, instead got 
-- stdout --
	registry.k8s.io/pause:3.7
	registry.k8s.io/kube-scheduler:v1.24.4
	registry.k8s.io/kube-proxy:v1.24.4
	registry.k8s.io/kube-controller-manager:v1.24.4
	registry.k8s.io/kube-apiserver:v1.24.4
	registry.k8s.io/etcd:3.5.3-0
	registry.k8s.io/coredns/coredns:v1.8.6
	k8s.gcr.io/pause:3.7
	k8s.gcr.io/kube-scheduler:v1.24.4
	k8s.gcr.io/kube-proxy:v1.24.4
	k8s.gcr.io/kube-controller-manager:v1.24.4
	k8s.gcr.io/kube-apiserver:v1.24.4
	k8s.gcr.io/etcd:3.5.3-0
	k8s.gcr.io/coredns/coredns:v1.8.6
	gcr.io/k8s-minikube/storage-provisioner:v5
	docker.io/kindest/kindnetd:v20220726-ed811e41

                                                
                                                
-- /stdout --
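The sequence this test exercises can be replayed by hand with the same commands that appear in the trace above; a minimal sketch, using the profile name, flags, and versions from the log (the final grep is illustrative):

    minikube start -p test-preload-862767 --memory=2200 --preload=false --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.24.4
    minikube -p test-preload-862767 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-862767
    minikube start -p test-preload-862767 --memory=2200 --wait=true --driver=kvm2 --container-runtime=crio
    # Expected to still list the pulled busybox image after the restart; in this run it is missing
    minikube -p test-preload-862767 image list | grep k8s-minikube/busybox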
panic.go:522: *** TestPreload FAILED at 2023-09-11 11:48:50.521637737 +0000 UTC m=+3135.834262638
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-862767 -n test-preload-862767
helpers_test.go:244: <<< TestPreload FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPreload]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-862767 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p test-preload-862767 logs -n 25: (1.211079688s)
helpers_test.go:252: TestPreload logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n multinode-378707 sudo cat                                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /home/docker/cp-test_multinode-378707-m03_multinode-378707.txt                          |                      |         |         |                     |                     |
	| cp      | multinode-378707 cp multinode-378707-m03:/home/docker/cp-test.txt                       | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m02:/home/docker/cp-test_multinode-378707-m03_multinode-378707-m02.txt |                      |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n                                                                 | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | multinode-378707-m03 sudo cat                                                           |                      |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                      |         |         |                     |                     |
	| ssh     | multinode-378707 ssh -n multinode-378707-m02 sudo cat                                   | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	|         | /home/docker/cp-test_multinode-378707-m03_multinode-378707-m02.txt                      |                      |         |         |                     |                     |
	| node    | multinode-378707 node stop m03                                                          | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:21 UTC |
	| node    | multinode-378707 node start                                                             | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:21 UTC | 11 Sep 23 11:22 UTC |
	|         | m03 --alsologtostderr                                                                   |                      |         |         |                     |                     |
	| node    | list -p multinode-378707                                                                | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:22 UTC |                     |
	| stop    | -p multinode-378707                                                                     | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:22 UTC |                     |
	| start   | -p multinode-378707                                                                     | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:24 UTC | 11 Sep 23 11:33 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	| node    | list -p multinode-378707                                                                | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:33 UTC |                     |
	| node    | multinode-378707 node delete                                                            | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:33 UTC | 11 Sep 23 11:33 UTC |
	|         | m03                                                                                     |                      |         |         |                     |                     |
	| stop    | multinode-378707 stop                                                                   | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:33 UTC |                     |
	| start   | -p multinode-378707                                                                     | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:35 UTC | 11 Sep 23 11:44 UTC |
	|         | --wait=true -v=8                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | list -p multinode-378707                                                                | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:44 UTC |                     |
	| start   | -p multinode-378707-m02                                                                 | multinode-378707-m02 | jenkins | v1.31.2 | 11 Sep 23 11:44 UTC |                     |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| start   | -p multinode-378707-m03                                                                 | multinode-378707-m03 | jenkins | v1.31.2 | 11 Sep 23 11:44 UTC | 11 Sep 23 11:45 UTC |
	|         | --driver=kvm2                                                                           |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| node    | add -p multinode-378707                                                                 | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:45 UTC |                     |
	| delete  | -p multinode-378707-m03                                                                 | multinode-378707-m03 | jenkins | v1.31.2 | 11 Sep 23 11:45 UTC | 11 Sep 23 11:45 UTC |
	| delete  | -p multinode-378707                                                                     | multinode-378707     | jenkins | v1.31.2 | 11 Sep 23 11:45 UTC | 11 Sep 23 11:45 UTC |
	| start   | -p test-preload-862767                                                                  | test-preload-862767  | jenkins | v1.31.2 | 11 Sep 23 11:45 UTC | 11 Sep 23 11:47 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                                                           |                      |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                                                           |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                               |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.24.4                                                            |                      |         |         |                     |                     |
	| image   | test-preload-862767 image pull                                                          | test-preload-862767  | jenkins | v1.31.2 | 11 Sep 23 11:47 UTC | 11 Sep 23 11:47 UTC |
	|         | gcr.io/k8s-minikube/busybox                                                             |                      |         |         |                     |                     |
	| stop    | -p test-preload-862767                                                                  | test-preload-862767  | jenkins | v1.31.2 | 11 Sep 23 11:47 UTC | 11 Sep 23 11:47 UTC |
	| start   | -p test-preload-862767                                                                  | test-preload-862767  | jenkins | v1.31.2 | 11 Sep 23 11:47 UTC | 11 Sep 23 11:48 UTC |
	|         | --memory=2200                                                                           |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                  |                      |         |         |                     |                     |
	|         | --wait=true --driver=kvm2                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                |                      |         |         |                     |                     |
	| image   | test-preload-862767 image list                                                          | test-preload-862767  | jenkins | v1.31.2 | 11 Sep 23 11:48 UTC | 11 Sep 23 11:48 UTC |
	|---------|-----------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:47:38
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:47:38.465042 2243792 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:47:38.465199 2243792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:47:38.465211 2243792 out.go:309] Setting ErrFile to fd 2...
	I0911 11:47:38.465218 2243792 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:47:38.465472 2243792 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:47:38.466136 2243792 out.go:303] Setting JSON to false
	I0911 11:47:38.467088 2243792 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":235809,"bootTime":1694197049,"procs":193,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:47:38.467161 2243792 start.go:138] virtualization: kvm guest
	I0911 11:47:38.470035 2243792 out.go:177] * [test-preload-862767] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:47:38.471610 2243792 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:47:38.471709 2243792 notify.go:220] Checking for updates...
	I0911 11:47:38.473314 2243792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:47:38.475091 2243792 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:47:38.476654 2243792 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:47:38.478226 2243792 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:47:38.479857 2243792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:47:38.482513 2243792 config.go:182] Loaded profile config "test-preload-862767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0911 11:47:38.482922 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:47:38.482988 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:47:38.498103 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38979
	I0911 11:47:38.498538 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:47:38.499136 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:47:38.499158 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:47:38.499533 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:47:38.499762 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:47:38.502020 2243792 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0911 11:47:38.503525 2243792 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:47:38.503879 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:47:38.503927 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:47:38.519014 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42171
	I0911 11:47:38.519496 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:47:38.520024 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:47:38.520050 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:47:38.520419 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:47:38.520797 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:47:38.558867 2243792 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 11:47:38.560430 2243792 start.go:298] selected driver: kvm2
	I0911 11:47:38.560466 2243792 start.go:902] validating driver "kvm2" against &{Name:test-preload-862767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-862767 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:47:38.560662 2243792 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:47:38.561592 2243792 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:47:38.561682 2243792 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:47:38.577898 2243792 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:47:38.578230 2243792 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 11:47:38.578264 2243792 cni.go:84] Creating CNI manager for ""
	I0911 11:47:38.578271 2243792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:47:38.578289 2243792 start_flags.go:321] config:
	{Name:test-preload-862767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-862767 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:47:38.578446 2243792 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:47:38.580650 2243792 out.go:177] * Starting control plane node test-preload-862767 in cluster test-preload-862767
	I0911 11:47:38.582147 2243792 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0911 11:47:38.604837 2243792 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0911 11:47:38.604888 2243792 cache.go:57] Caching tarball of preloaded images
	I0911 11:47:38.605087 2243792 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0911 11:47:38.607195 2243792 out.go:177] * Downloading Kubernetes v1.24.4 preload ...
	I0911 11:47:38.608806 2243792 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:47:38.639131 2243792 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.24.4/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4?checksum=md5:b2ee0ab83ed99f9e7ff71cb0cf27e8f9 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4
	I0911 11:47:42.184591 2243792 preload.go:249] saving checksum for preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:47:42.184704 2243792 preload.go:256] verifying checksum of /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 ...
	I0911 11:47:43.075790 2243792 cache.go:60] Finished verifying existence of preloaded tar for  v1.24.4 on crio
	I0911 11:47:43.075980 2243792 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/config.json ...
	I0911 11:47:43.076219 2243792 start.go:365] acquiring machines lock for test-preload-862767: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:47:43.076294 2243792 start.go:369] acquired machines lock for "test-preload-862767" in 51.926µs
	I0911 11:47:43.076308 2243792 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:47:43.076317 2243792 fix.go:54] fixHost starting: 
	I0911 11:47:43.076608 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:47:43.076645 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:47:43.092190 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43363
	I0911 11:47:43.092730 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:47:43.093284 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:47:43.093316 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:47:43.093690 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:47:43.093931 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:47:43.094066 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetState
	I0911 11:47:43.096029 2243792 fix.go:102] recreateIfNeeded on test-preload-862767: state=Stopped err=<nil>
	I0911 11:47:43.096079 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	W0911 11:47:43.096281 2243792 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:47:43.098908 2243792 out.go:177] * Restarting existing kvm2 VM for "test-preload-862767" ...
	I0911 11:47:43.100695 2243792 main.go:141] libmachine: (test-preload-862767) Calling .Start
	I0911 11:47:43.100982 2243792 main.go:141] libmachine: (test-preload-862767) Ensuring networks are active...
	I0911 11:47:43.102033 2243792 main.go:141] libmachine: (test-preload-862767) Ensuring network default is active
	I0911 11:47:43.102408 2243792 main.go:141] libmachine: (test-preload-862767) Ensuring network mk-test-preload-862767 is active
	I0911 11:47:43.102817 2243792 main.go:141] libmachine: (test-preload-862767) Getting domain xml...
	I0911 11:47:43.103775 2243792 main.go:141] libmachine: (test-preload-862767) Creating domain...
	I0911 11:47:44.374986 2243792 main.go:141] libmachine: (test-preload-862767) Waiting to get IP...
	I0911 11:47:44.375957 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:44.376353 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:44.376453 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:44.376353 2243839 retry.go:31] will retry after 233.912351ms: waiting for machine to come up
	I0911 11:47:44.612009 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:44.612426 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:44.612455 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:44.612368 2243839 retry.go:31] will retry after 351.810998ms: waiting for machine to come up
	I0911 11:47:44.966160 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:44.966593 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:44.966626 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:44.966532 2243839 retry.go:31] will retry after 420.301931ms: waiting for machine to come up
	I0911 11:47:45.388288 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:45.388826 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:45.388857 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:45.388761 2243839 retry.go:31] will retry after 508.367842ms: waiting for machine to come up
	I0911 11:47:45.899175 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:45.899829 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:45.899861 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:45.899739 2243839 retry.go:31] will retry after 723.377122ms: waiting for machine to come up
	I0911 11:47:46.624906 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:46.625385 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:46.625417 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:46.625325 2243839 retry.go:31] will retry after 831.87389ms: waiting for machine to come up
	I0911 11:47:47.458553 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:47.458967 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:47.458993 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:47.458908 2243839 retry.go:31] will retry after 964.979888ms: waiting for machine to come up
	I0911 11:47:48.425697 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:48.426194 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:48.426234 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:48.426129 2243839 retry.go:31] will retry after 1.427702252s: waiting for machine to come up
	I0911 11:47:49.855882 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:49.856314 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:49.856342 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:49.856272 2243839 retry.go:31] will retry after 1.512687695s: waiting for machine to come up
	I0911 11:47:51.370639 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:51.371088 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:51.371120 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:51.371032 2243839 retry.go:31] will retry after 2.114958514s: waiting for machine to come up
	I0911 11:47:53.488138 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:53.488621 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:53.488648 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:53.488571 2243839 retry.go:31] will retry after 1.943737909s: waiting for machine to come up
	I0911 11:47:55.434863 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:55.435391 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:55.435421 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:55.435355 2243839 retry.go:31] will retry after 3.569460365s: waiting for machine to come up
	I0911 11:47:59.006551 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:47:59.007147 2243792 main.go:141] libmachine: (test-preload-862767) DBG | unable to find current IP address of domain test-preload-862767 in network mk-test-preload-862767
	I0911 11:47:59.007176 2243792 main.go:141] libmachine: (test-preload-862767) DBG | I0911 11:47:59.007089 2243839 retry.go:31] will retry after 3.60811357s: waiting for machine to come up
	I0911 11:48:02.620240 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.620784 2243792 main.go:141] libmachine: (test-preload-862767) Found IP for machine: 192.168.39.144
	I0911 11:48:02.620832 2243792 main.go:141] libmachine: (test-preload-862767) Reserving static IP address...
	I0911 11:48:02.620857 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has current primary IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.621297 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "test-preload-862767", mac: "52:54:00:c8:38:88", ip: "192.168.39.144"} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:02.621327 2243792 main.go:141] libmachine: (test-preload-862767) Reserved static IP address: 192.168.39.144
	I0911 11:48:02.621347 2243792 main.go:141] libmachine: (test-preload-862767) DBG | skip adding static IP to network mk-test-preload-862767 - found existing host DHCP lease matching {name: "test-preload-862767", mac: "52:54:00:c8:38:88", ip: "192.168.39.144"}
	I0911 11:48:02.621361 2243792 main.go:141] libmachine: (test-preload-862767) Waiting for SSH to be available...
	I0911 11:48:02.621372 2243792 main.go:141] libmachine: (test-preload-862767) DBG | Getting to WaitForSSH function...
	I0911 11:48:02.623697 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.624025 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:02.624090 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.624159 2243792 main.go:141] libmachine: (test-preload-862767) DBG | Using SSH client type: external
	I0911 11:48:02.624183 2243792 main.go:141] libmachine: (test-preload-862767) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa (-rw-------)
	I0911 11:48:02.624217 2243792 main.go:141] libmachine: (test-preload-862767) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.144 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 11:48:02.624236 2243792 main.go:141] libmachine: (test-preload-862767) DBG | About to run SSH command:
	I0911 11:48:02.624246 2243792 main.go:141] libmachine: (test-preload-862767) DBG | exit 0
	I0911 11:48:02.713655 2243792 main.go:141] libmachine: (test-preload-862767) DBG | SSH cmd err, output: <nil>: 
	I0911 11:48:02.714080 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetConfigRaw
	I0911 11:48:02.714872 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetIP
	I0911 11:48:02.718065 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.718535 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:02.718579 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.718893 2243792 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/config.json ...
	I0911 11:48:02.719172 2243792 machine.go:88] provisioning docker machine ...
	I0911 11:48:02.719201 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:02.719482 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetMachineName
	I0911 11:48:02.719716 2243792 buildroot.go:166] provisioning hostname "test-preload-862767"
	I0911 11:48:02.719741 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetMachineName
	I0911 11:48:02.719928 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:02.722451 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.722867 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:02.722893 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.723087 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:02.723310 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:02.723507 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:02.723701 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:02.723879 2243792 main.go:141] libmachine: Using SSH client type: native
	I0911 11:48:02.724302 2243792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0911 11:48:02.724317 2243792 main.go:141] libmachine: About to run SSH command:
	sudo hostname test-preload-862767 && echo "test-preload-862767" | sudo tee /etc/hostname
	I0911 11:48:02.855468 2243792 main.go:141] libmachine: SSH cmd err, output: <nil>: test-preload-862767
	
	I0911 11:48:02.855503 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:02.858508 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.858871 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:02.858912 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.859099 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:02.859307 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:02.859524 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:02.859696 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:02.859893 2243792 main.go:141] libmachine: Using SSH client type: native
	I0911 11:48:02.860305 2243792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0911 11:48:02.860324 2243792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\stest-preload-862767' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 test-preload-862767/g' /etc/hosts;
				else 
					echo '127.0.1.1 test-preload-862767' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:48:02.980419 2243792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:48:02.980461 2243792 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:48:02.980501 2243792 buildroot.go:174] setting up certificates
	I0911 11:48:02.980519 2243792 provision.go:83] configureAuth start
	I0911 11:48:02.980533 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetMachineName
	I0911 11:48:02.980932 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetIP
	I0911 11:48:02.984090 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.984524 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:02.984562 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.984930 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:02.987501 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.987899 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:02.987931 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:02.988055 2243792 provision.go:138] copyHostCerts
	I0911 11:48:02.988127 2243792 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:48:02.988137 2243792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:48:02.988203 2243792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:48:02.988302 2243792 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:48:02.988310 2243792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:48:02.988337 2243792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:48:02.988388 2243792 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:48:02.988395 2243792 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:48:02.988415 2243792 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:48:02.988458 2243792 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.test-preload-862767 san=[192.168.39.144 192.168.39.144 localhost 127.0.0.1 minikube test-preload-862767]
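The provision step above issues a server certificate whose org and SAN list cover the machine IP, loopback, and the node's names. A minimal sketch of producing such a certificate with Go's crypto/x509, not minikube's actual provision code; the key size, validity window, and key usages are illustrative assumptions.

package provision

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate signed by the given CA whose
// SANs cover the addresses and names listed in the log line above. The key
// size, validity window, and key usages are illustrative assumptions.
func signServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.test-preload-862767"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(1, 0, 0), // assumed one-year validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs from the log: the machine IP, loopback, and the node's names.
		IPAddresses: []net.IP{net.ParseIP("192.168.39.144"), net.ParseIP("127.0.0.1")},
		DNSNames:    []string{"localhost", "minikube", "test-preload-862767"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
	return der, key, err
}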
	I0911 11:48:03.103297 2243792 provision.go:172] copyRemoteCerts
	I0911 11:48:03.103364 2243792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:48:03.103396 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:03.107047 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.107524 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:03.107563 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.107792 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:03.108048 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.108230 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:03.108400 2243792 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa Username:docker}
	I0911 11:48:03.195529 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0911 11:48:03.223129 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:48:03.250047 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:48:03.277980 2243792 provision.go:86] duration metric: configureAuth took 297.439708ms
	I0911 11:48:03.278038 2243792 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:48:03.278298 2243792 config.go:182] Loaded profile config "test-preload-862767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0911 11:48:03.278410 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:03.281544 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.281940 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:03.281975 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.282147 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:03.282373 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.282554 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.282710 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:03.282937 2243792 main.go:141] libmachine: Using SSH client type: native
	I0911 11:48:03.283343 2243792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0911 11:48:03.283358 2243792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:48:03.604835 2243792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:48:03.604868 2243792 machine.go:91] provisioned docker machine in 885.67876ms
	I0911 11:48:03.604882 2243792 start.go:300] post-start starting for "test-preload-862767" (driver="kvm2")
	I0911 11:48:03.604897 2243792 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:48:03.604923 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:03.605254 2243792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:48:03.605293 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:03.608330 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.608636 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:03.608672 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.608886 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:03.609102 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.609289 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:03.609445 2243792 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa Username:docker}
	I0911 11:48:03.696417 2243792 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:48:03.701882 2243792 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:48:03.701913 2243792 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:48:03.701989 2243792 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:48:03.702062 2243792 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:48:03.702148 2243792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:48:03.712548 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:48:03.742202 2243792 start.go:303] post-start completed in 137.302104ms
	I0911 11:48:03.742234 2243792 fix.go:56] fixHost completed within 20.665916744s
	I0911 11:48:03.742258 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:03.746176 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.746691 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:03.746731 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.747046 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:03.747286 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.747486 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.747649 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:03.747950 2243792 main.go:141] libmachine: Using SSH client type: native
	I0911 11:48:03.748647 2243792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.144 22 <nil> <nil>}
	I0911 11:48:03.748666 2243792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 11:48:03.862422 2243792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694432883.806462778
	
	I0911 11:48:03.862454 2243792 fix.go:206] guest clock: 1694432883.806462778
	I0911 11:48:03.862465 2243792 fix.go:219] Guest: 2023-09-11 11:48:03.806462778 +0000 UTC Remote: 2023-09-11 11:48:03.742237374 +0000 UTC m=+25.315619628 (delta=64.225404ms)
	I0911 11:48:03.862536 2243792 fix.go:190] guest clock delta is within tolerance: 64.225404ms
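The fix step above parses the guest's `date +%s.%N` output and accepts the run because the 64ms delta is inside the tolerance. A minimal sketch of that comparison; the one-second tolerance is an assumption, not minikube's actual threshold in fix.go.

package clocksync

import (
	"fmt"
	"time"
)

const driftTolerance = time.Second // assumed; the real threshold lives in minikube's fix.go

// checkGuestClock compares the guest's reported time with the host's and
// returns an error only if the absolute drift exceeds the tolerance.
func checkGuestClock(guest, host time.Time) error {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	if delta > driftTolerance {
		return fmt.Errorf("guest clock delta %v exceeds tolerance %v", delta, driftTolerance)
	}
	return nil
}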
	I0911 11:48:03.862547 2243792 start.go:83] releasing machines lock for "test-preload-862767", held for 20.78624287s
	I0911 11:48:03.862582 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:03.862996 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetIP
	I0911 11:48:03.866154 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.866615 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:03.866651 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.866952 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:03.867659 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:03.867909 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:03.868019 2243792 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:48:03.868065 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:03.868138 2243792 ssh_runner.go:195] Run: cat /version.json
	I0911 11:48:03.868177 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:03.871192 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.871318 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.871557 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:03.871589 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.871674 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:03.871718 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:03.871781 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:03.871953 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:03.872038 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.872121 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:03.872183 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:03.872289 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:03.872357 2243792 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa Username:docker}
	I0911 11:48:03.872396 2243792 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa Username:docker}
	I0911 11:48:03.978779 2243792 ssh_runner.go:195] Run: systemctl --version
	I0911 11:48:03.985443 2243792 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:48:04.135173 2243792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 11:48:04.142381 2243792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:48:04.142463 2243792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:48:04.161447 2243792 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 11:48:04.161479 2243792 start.go:466] detecting cgroup driver to use...
	I0911 11:48:04.161567 2243792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:48:04.176040 2243792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:48:04.189415 2243792 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:48:04.189484 2243792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:48:04.205196 2243792 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:48:04.219359 2243792 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:48:04.328752 2243792 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:48:04.453026 2243792 docker.go:212] disabling docker service ...
	I0911 11:48:04.453106 2243792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:48:04.467621 2243792 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:48:04.481430 2243792 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:48:04.590897 2243792 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:48:04.712009 2243792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:48:04.726202 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:48:04.745517 2243792 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.7" pause image...
	I0911 11:48:04.745593 2243792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.7"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:48:04.756274 2243792 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:48:04.756365 2243792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:48:04.766845 2243792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:48:04.777525 2243792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:48:04.788421 2243792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:48:04.799933 2243792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:48:04.809854 2243792 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 11:48:04.809955 2243792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 11:48:04.823713 2243792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
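The error above is expected: the bridge-nf-call-iptables sysctl only exists once br_netfilter is loaded, so the failed probe triggers a modprobe before IPv4 forwarding is enabled. A minimal sketch of that fallback; the run argument is an assumed stand-in for minikube's ssh_runner.

package netfilter

// ensureBridgeNetfilter mirrors the fallback visible in the log: probe the
// sysctl, load br_netfilter if the probe fails, then enable ip_forward.
func ensureBridgeNetfilter(run func(cmd string) error) error {
	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
		if err := run("sudo modprobe br_netfilter"); err != nil {
			return err
		}
	}
	return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
}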
	I0911 11:48:04.833697 2243792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:48:04.948908 2243792 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:48:05.129444 2243792 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:48:05.129529 2243792 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:48:05.137815 2243792 start.go:534] Will wait 60s for crictl version
	I0911 11:48:05.137899 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:05.142314 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:48:05.178719 2243792 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 11:48:05.178816 2243792 ssh_runner.go:195] Run: crio --version
	I0911 11:48:05.223695 2243792 ssh_runner.go:195] Run: crio --version
	I0911 11:48:05.275804 2243792 out.go:177] * Preparing Kubernetes v1.24.4 on CRI-O 1.24.1 ...
	I0911 11:48:05.277543 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetIP
	I0911 11:48:05.281185 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:05.281509 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:05.281540 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:05.281846 2243792 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 11:48:05.286372 2243792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:48:05.301500 2243792 preload.go:132] Checking if preload exists for k8s version v1.24.4 and runtime crio
	I0911 11:48:05.301562 2243792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:48:05.337053 2243792 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0911 11:48:05.337123 2243792 ssh_runner.go:195] Run: which lz4
	I0911 11:48:05.341371 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 11:48:05.345838 2243792 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 11:48:05.345874 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.24.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (459355427 bytes)
	I0911 11:48:07.385009 2243792 crio.go:444] Took 2.043644 seconds to copy over tarball
	I0911 11:48:07.385102 2243792 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 11:48:10.655403 2243792 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.270258631s)
	I0911 11:48:10.655448 2243792 crio.go:451] Took 3.270404 seconds to extract the tarball
	I0911 11:48:10.655459 2243792 ssh_runner.go:146] rm: /preloaded.tar.lz4
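The preload sequence above checks for the tarball on the guest, copies it over only because the stat failed, extracts it into /var with lz4, and removes it. A minimal sketch of that flow; the run/copyTo helpers are assumed stand-ins for minikube's ssh_runner, not its real API.

package preload

import "fmt"

// restorePreload follows the sequence in the log: probe for the remote
// tarball, transfer it if absent, extract into /var, then delete it.
func restorePreload(run func(cmd string) error, copyTo func(local, remote string) error, localTarball string) error {
	const remote = "/preloaded.tar.lz4"
	if err := run(fmt.Sprintf("stat -c \"%%s %%y\" %s", remote)); err != nil {
		if err := copyTo(localTarball, remote); err != nil {
			return err
		}
	}
	if err := run("sudo tar -I lz4 -C /var -xf " + remote); err != nil {
		return err
	}
	return run("rm " + remote)
}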
	I0911 11:48:10.700500 2243792 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:48:10.749639 2243792 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.24.4". assuming images are not preloaded.
	I0911 11:48:10.749669 2243792 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.24.4 registry.k8s.io/kube-controller-manager:v1.24.4 registry.k8s.io/kube-scheduler:v1.24.4 registry.k8s.io/kube-proxy:v1.24.4 registry.k8s.io/pause:3.7 registry.k8s.io/etcd:3.5.3-0 registry.k8s.io/coredns/coredns:v1.8.6 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 11:48:10.749736 2243792 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:48:10.749793 2243792 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.3-0
	I0911 11:48:10.749826 2243792 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.24.4
	I0911 11:48:10.749847 2243792 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.24.4
	I0911 11:48:10.749875 2243792 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.8.6
	I0911 11:48:10.749829 2243792 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 11:48:10.749802 2243792 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.24.4
	I0911 11:48:10.750013 2243792 image.go:134] retrieving image: registry.k8s.io/pause:3.7
	I0911 11:48:10.751202 2243792 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.24.4
	I0911 11:48:10.751230 2243792 image.go:177] daemon lookup for registry.k8s.io/pause:3.7: Error response from daemon: No such image: registry.k8s.io/pause:3.7
	I0911 11:48:10.751236 2243792 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.24.4
	I0911 11:48:10.751237 2243792 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.24.4
	I0911 11:48:10.751204 2243792 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.24.4: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 11:48:10.751295 2243792 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:48:10.751203 2243792 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.3-0
	I0911 11:48:10.751208 2243792 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.8.6: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.8.6
	I0911 11:48:10.923003 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.3-0
	I0911 11:48:10.928162 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.24.4
	I0911 11:48:10.929356 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.7
	I0911 11:48:10.937382 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 11:48:10.940371 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.24.4
	I0911 11:48:10.954464 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.24.4
	I0911 11:48:10.978866 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.8.6
	I0911 11:48:11.027660 2243792 cache_images.go:116] "registry.k8s.io/etcd:3.5.3-0" needs transfer: "registry.k8s.io/etcd:3.5.3-0" does not exist at hash "aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b" in container runtime
	I0911 11:48:11.027713 2243792 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.3-0
	I0911 11:48:11.027781 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:11.050232 2243792 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:48:11.116074 2243792 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.24.4" needs transfer: "registry.k8s.io/kube-proxy:v1.24.4" does not exist at hash "7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7" in container runtime
	I0911 11:48:11.116124 2243792 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.24.4
	I0911 11:48:11.116194 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:11.152673 2243792 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.24.4" needs transfer: "registry.k8s.io/kube-controller-manager:v1.24.4" does not exist at hash "1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48" in container runtime
	I0911 11:48:11.152702 2243792 cache_images.go:116] "registry.k8s.io/pause:3.7" needs transfer: "registry.k8s.io/pause:3.7" does not exist at hash "221177c6082a88ea4f6240ab2450d540955ac6f4d5454f0e15751b653ebda165" in container runtime
	I0911 11:48:11.152727 2243792 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 11:48:11.152739 2243792 cri.go:218] Removing image: registry.k8s.io/pause:3.7
	I0911 11:48:11.152756 2243792 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.24.4" needs transfer: "registry.k8s.io/kube-apiserver:v1.24.4" does not exist at hash "6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d" in container runtime
	I0911 11:48:11.152781 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:11.152781 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:11.152786 2243792 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.24.4
	I0911 11:48:11.152789 2243792 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.24.4" needs transfer: "registry.k8s.io/kube-scheduler:v1.24.4" does not exist at hash "03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9" in container runtime
	I0911 11:48:11.152822 2243792 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.24.4
	I0911 11:48:11.152834 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:11.152858 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:11.181866 2243792 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.8.6" needs transfer: "registry.k8s.io/coredns/coredns:v1.8.6" does not exist at hash "a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03" in container runtime
	I0911 11:48:11.181923 2243792 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.8.6
	I0911 11:48:11.181946 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.3-0
	I0911 11:48:11.181967 2243792 ssh_runner.go:195] Run: which crictl
	I0911 11:48:11.293862 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.24.4
	I0911 11:48:11.294014 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.24.4
	I0911 11:48:11.294044 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.24.4
	I0911 11:48:11.294136 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.7
	I0911 11:48:11.294163 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.24.4
	I0911 11:48:11.294284 2243792 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.8.6
	I0911 11:48:11.294382 2243792 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0
	I0911 11:48:11.294463 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.3-0
	I0911 11:48:11.378734 2243792 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4
	I0911 11:48:11.378864 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.24.4
	I0911 11:48:11.414166 2243792 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4
	I0911 11:48:11.414301 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0911 11:48:11.432996 2243792 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4
	I0911 11:48:11.433015 2243792 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4
	I0911 11:48:11.433085 2243792 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7
	I0911 11:48:11.433119 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0911 11:48:11.433124 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0911 11:48:11.433175 2243792 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6
	I0911 11:48:11.433204 2243792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.3-0 (exists)
	I0911 11:48:11.433219 2243792 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.3-0
	I0911 11:48:11.433259 2243792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0
	I0911 11:48:11.433270 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.7
	I0911 11:48:11.433278 2243792 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.8.6
	I0911 11:48:11.433305 2243792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.24.4 (exists)
	I0911 11:48:11.433337 2243792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.24.4 (exists)
	I0911 11:48:11.445052 2243792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.7 (exists)
	I0911 11:48:11.445179 2243792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.24.4 (exists)
	I0911 11:48:11.445621 2243792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.8.6 (exists)
	I0911 11:48:13.603186 2243792 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.3-0: (2.1698986s)
	I0911 11:48:13.603219 2243792 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.3-0 from cache
	I0911 11:48:13.603256 2243792 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.24.4
	I0911 11:48:13.603281 2243792 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.24.4: (2.170141037s)
	I0911 11:48:13.603308 2243792 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.24.4 (exists)
	I0911 11:48:13.603338 2243792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.24.4
	I0911 11:48:14.553805 2243792 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.24.4 from cache
	I0911 11:48:14.553854 2243792 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0911 11:48:14.553923 2243792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.24.4
	I0911 11:48:15.404856 2243792 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.24.4 from cache
	I0911 11:48:15.404917 2243792 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.7
	I0911 11:48:15.404979 2243792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.7
	I0911 11:48:15.553623 2243792 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.7 from cache
	I0911 11:48:15.553729 2243792 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0911 11:48:15.553807 2243792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.24.4
	I0911 11:48:16.010682 2243792 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.24.4 from cache
	I0911 11:48:16.010743 2243792 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.8.6
	I0911 11:48:16.010808 2243792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.8.6
	I0911 11:48:16.466798 2243792 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.8.6 from cache
	I0911 11:48:16.466859 2243792 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0911 11:48:16.466949 2243792 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.24.4
	I0911 11:48:17.214284 2243792 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.24.4 from cache
	I0911 11:48:17.214340 2243792 cache_images.go:123] Successfully loaded all cached images
	I0911 11:48:17.214347 2243792 cache_images.go:92] LoadImages completed in 6.464665691s
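The LoadImages phase above follows a consistent pattern per image: probe the runtime, and only when the image is missing transfer the cached tarball and load it with podman. A minimal sketch of that loop, not minikube's cache_images.go; the runner interface and the remote path layout are illustrative assumptions.

package images

import (
	"fmt"
	"path/filepath"
)

// runner abstracts the two operations visible in the log: running a command
// on the guest and copying a file to it (an assumed stand-in for ssh_runner).
type runner interface {
	Run(cmd string) error
	Copy(localPath, remotePath string) error
}

// loadCachedImages probes the runtime for each image, transfers the cached
// tarball only when the image is missing, then loads it with podman.
func loadCachedImages(r runner, cache map[string]string) error {
	for image, tarball := range cache { // image ref -> local cache tarball path
		if err := r.Run(fmt.Sprintf("sudo podman image inspect --format {{.Id}} %s", image)); err == nil {
			continue // the runtime already has this image
		}
		remote := "/var/lib/minikube/images/" + filepath.Base(tarball)
		if err := r.Copy(tarball, remote); err != nil {
			return err
		}
		if err := r.Run("sudo podman load -i " + remote); err != nil {
			return err
		}
	}
	return nil
}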
	I0911 11:48:17.214443 2243792 ssh_runner.go:195] Run: crio config
	I0911 11:48:17.288534 2243792 cni.go:84] Creating CNI manager for ""
	I0911 11:48:17.288571 2243792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:48:17.288600 2243792 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:48:17.288628 2243792 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.144 APIServerPort:8443 KubernetesVersion:v1.24.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:test-preload-862767 NodeName:test-preload-862767 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.144"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.144 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:48:17.288866 2243792 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.144
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "test-preload-862767"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.144
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.144"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.24.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:48:17.288964 2243792 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.24.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=test-preload-862767 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.144
	
	[Install]
	 config:
	{KubernetesVersion:v1.24.4 ClusterName:test-preload-862767 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:48:17.289026 2243792 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.24.4
	I0911 11:48:17.300007 2243792 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:48:17.300093 2243792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:48:17.310499 2243792 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (379 bytes)
	I0911 11:48:17.328804 2243792 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:48:17.346723 2243792 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2106 bytes)
	I0911 11:48:17.365852 2243792 ssh_runner.go:195] Run: grep 192.168.39.144	control-plane.minikube.internal$ /etc/hosts
	I0911 11:48:17.370390 2243792 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.144	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 11:48:17.383925 2243792 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767 for IP: 192.168.39.144
	I0911 11:48:17.383966 2243792 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:48:17.384137 2243792 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:48:17.384177 2243792 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:48:17.384244 2243792 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/client.key
	I0911 11:48:17.384304 2243792 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/apiserver.key.4482163a
	I0911 11:48:17.384357 2243792 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/proxy-client.key
	I0911 11:48:17.384475 2243792 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:48:17.384507 2243792 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:48:17.384518 2243792 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:48:17.384542 2243792 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:48:17.384568 2243792 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:48:17.384592 2243792 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:48:17.384656 2243792 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:48:17.385468 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:48:17.411493 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:48:17.439751 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:48:17.468297 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 11:48:17.496334 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:48:17.523542 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:48:17.552158 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:48:17.578581 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:48:17.604465 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:48:17.630346 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:48:17.663655 2243792 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:48:17.690639 2243792 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:48:17.711896 2243792 ssh_runner.go:195] Run: openssl version
	I0911 11:48:17.719392 2243792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:48:17.732537 2243792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:48:17.738070 2243792 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:48:17.738138 2243792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:48:17.744886 2243792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:48:17.757433 2243792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:48:17.770294 2243792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:48:17.776665 2243792 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:48:17.776753 2243792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:48:17.783794 2243792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:48:17.796727 2243792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:48:17.811749 2243792 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:48:17.818113 2243792 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:48:17.818203 2243792 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:48:17.825978 2243792 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 11:48:17.839231 2243792 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:48:17.845220 2243792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 11:48:17.851711 2243792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 11:48:17.858740 2243792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 11:48:17.865404 2243792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 11:48:17.872156 2243792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 11:48:17.879262 2243792 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 11:48:17.885881 2243792 kubeadm.go:404] StartCluster: {Name:test-preload-862767 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.24.4 ClusterName:test-preload-862767 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:48:17.885992 2243792 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:48:17.886052 2243792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:48:17.918348 2243792 cri.go:89] found id: ""
	I0911 11:48:17.918429 2243792 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 11:48:17.929421 2243792 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 11:48:17.929451 2243792 kubeadm.go:636] restartCluster start
	I0911 11:48:17.929536 2243792 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 11:48:17.940133 2243792 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:17.940597 2243792 kubeconfig.go:135] verify returned: extract IP: "test-preload-862767" does not appear in /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:48:17.940715 2243792 kubeconfig.go:146] "test-preload-862767" context is missing from /home/jenkins/minikube-integration/17223-2215273/kubeconfig - will repair!
	I0911 11:48:17.941124 2243792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:48:17.941791 2243792 kapi.go:59] client config for test-preload-862767: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:48:17.942961 2243792 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 11:48:17.953703 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:17.953807 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:17.966875 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:17.966899 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:17.966944 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:17.979679 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:18.480588 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:18.480695 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:18.493566 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:18.980568 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:18.980732 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:18.994524 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:19.480085 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:19.480228 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:19.493037 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:19.980732 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:19.980863 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:19.994996 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:20.480653 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:20.480778 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:20.493993 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:20.980249 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:20.980366 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:20.993783 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:21.480512 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:21.480616 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:21.493812 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:21.980538 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:21.980649 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:21.993476 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:22.480049 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:22.480198 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:22.494055 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:22.980940 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:22.981062 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:22.994524 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:23.480080 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:23.480180 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:23.493351 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:23.980449 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:23.980562 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:23.994461 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:24.480036 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:24.480158 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:24.493457 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:24.980032 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:24.980128 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:24.993437 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:25.480006 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:25.480121 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:25.493090 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:25.980700 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:25.980803 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:25.994574 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:26.480089 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:26.480178 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:26.493158 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:26.980693 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:26.980830 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:26.993687 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:27.480233 2243792 api_server.go:166] Checking apiserver status ...
	I0911 11:48:27.480337 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 11:48:27.493728 2243792 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 11:48:27.954596 2243792 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 11:48:27.954660 2243792 kubeadm.go:1128] stopping kube-system containers ...
	I0911 11:48:27.954678 2243792 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 11:48:27.954761 2243792 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:48:27.993896 2243792 cri.go:89] found id: ""
	I0911 11:48:27.993992 2243792 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 11:48:28.012513 2243792 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 11:48:28.024793 2243792 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 11:48:28.024905 2243792 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 11:48:28.035776 2243792 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 11:48:28.035825 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:48:28.159802 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:48:29.388337 2243792 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.228486983s)
	I0911 11:48:29.388376 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:48:29.799423 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:48:29.873038 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:48:29.968498 2243792 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:48:29.968604 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:29.988351 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:30.519954 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:31.019456 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:31.519509 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:32.019497 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:32.519564 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:32.542781 2243792 api_server.go:72] duration metric: took 2.574282966s to wait for apiserver process to appear ...
	I0911 11:48:32.542812 2243792 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:48:32.542830 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:32.543343 2243792 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0911 11:48:32.543379 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:32.543831 2243792 api_server.go:269] stopped: https://192.168.39.144:8443/healthz: Get "https://192.168.39.144:8443/healthz": dial tcp 192.168.39.144:8443: connect: connection refused
	I0911 11:48:33.044632 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:37.718004 2243792 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 11:48:37.718057 2243792 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 11:48:37.718071 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:37.770783 2243792 api_server.go:279] https://192.168.39.144:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 11:48:37.770831 2243792 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 11:48:38.044495 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:38.051873 2243792 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0911 11:48:38.051910 2243792 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0911 11:48:38.544498 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:38.551260 2243792 api_server.go:279] https://192.168.39.144:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0911 11:48:38.551307 2243792 api_server.go:103] status: https://192.168.39.144:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0911 11:48:39.044103 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:39.050741 2243792 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0911 11:48:39.058946 2243792 api_server.go:141] control plane version: v1.24.4
	I0911 11:48:39.058982 2243792 api_server.go:131] duration metric: took 6.516162113s to wait for apiserver health ...
	I0911 11:48:39.058995 2243792 cni.go:84] Creating CNI manager for ""
	I0911 11:48:39.059003 2243792 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:48:39.061624 2243792 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 11:48:39.063610 2243792 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 11:48:39.077361 2243792 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 11:48:39.098620 2243792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:48:39.108625 2243792 system_pods.go:59] 8 kube-system pods found
	I0911 11:48:39.108666 2243792 system_pods.go:61] "coredns-6d4b75cb6d-f5c6v" [b7277dc6-857d-42fb-9087-661c5b3c05fb] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 11:48:39.108675 2243792 system_pods.go:61] "coredns-6d4b75cb6d-zgsgt" [d37fccb4-5c0a-45e8-9483-63b1f0ad9210] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 11:48:39.108681 2243792 system_pods.go:61] "etcd-test-preload-862767" [0c5acd9c-564a-43c8-8fea-a3a90a37e943] Running
	I0911 11:48:39.108686 2243792 system_pods.go:61] "kube-apiserver-test-preload-862767" [cf4abc5c-71af-4c49-900c-d7dacd9119b9] Running
	I0911 11:48:39.108690 2243792 system_pods.go:61] "kube-controller-manager-test-preload-862767" [9a26b860-e90a-48f1-b68a-70f6c34bd9d3] Running
	I0911 11:48:39.108694 2243792 system_pods.go:61] "kube-proxy-mdwrk" [1a75dbb5-3f86-4988-8b59-5a0a3d7ea584] Running
	I0911 11:48:39.108699 2243792 system_pods.go:61] "kube-scheduler-test-preload-862767" [ee9175d7-5551-46ba-9215-370152e737bc] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 11:48:39.108707 2243792 system_pods.go:61] "storage-provisioner" [473f77f8-304b-4222-96da-f05c82f16f33] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 11:48:39.108713 2243792 system_pods.go:74] duration metric: took 10.064354ms to wait for pod list to return data ...
	I0911 11:48:39.108728 2243792 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:48:39.113308 2243792 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:48:39.113346 2243792 node_conditions.go:123] node cpu capacity is 2
	I0911 11:48:39.113358 2243792 node_conditions.go:105] duration metric: took 4.625163ms to run NodePressure ...
	I0911 11:48:39.113383 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.24.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 11:48:39.498700 2243792 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 11:48:39.506560 2243792 kubeadm.go:787] kubelet initialised
	I0911 11:48:39.506580 2243792 kubeadm.go:788] duration metric: took 7.846904ms waiting for restarted kubelet to initialise ...
	I0911 11:48:39.506588 2243792 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:48:39.513968 2243792 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-f5c6v" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:39.520304 2243792 pod_ready.go:97] node "test-preload-862767" hosting pod "coredns-6d4b75cb6d-f5c6v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.520332 2243792 pod_ready.go:81] duration metric: took 6.334914ms waiting for pod "coredns-6d4b75cb6d-f5c6v" in "kube-system" namespace to be "Ready" ...
	E0911 11:48:39.520341 2243792 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-862767" hosting pod "coredns-6d4b75cb6d-f5c6v" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.520351 2243792 pod_ready.go:78] waiting up to 4m0s for pod "coredns-6d4b75cb6d-zgsgt" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:39.526433 2243792 pod_ready.go:97] node "test-preload-862767" hosting pod "coredns-6d4b75cb6d-zgsgt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.526459 2243792 pod_ready.go:81] duration metric: took 6.102274ms waiting for pod "coredns-6d4b75cb6d-zgsgt" in "kube-system" namespace to be "Ready" ...
	E0911 11:48:39.526468 2243792 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-862767" hosting pod "coredns-6d4b75cb6d-zgsgt" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.526478 2243792 pod_ready.go:78] waiting up to 4m0s for pod "etcd-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:39.532283 2243792 pod_ready.go:97] node "test-preload-862767" hosting pod "etcd-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.532316 2243792 pod_ready.go:81] duration metric: took 5.83221ms waiting for pod "etcd-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	E0911 11:48:39.532324 2243792 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-862767" hosting pod "etcd-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.532334 2243792 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:39.538166 2243792 pod_ready.go:97] node "test-preload-862767" hosting pod "kube-apiserver-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.538200 2243792 pod_ready.go:81] duration metric: took 5.860359ms waiting for pod "kube-apiserver-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	E0911 11:48:39.538209 2243792 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-862767" hosting pod "kube-apiserver-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.538221 2243792 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:39.903205 2243792 pod_ready.go:97] node "test-preload-862767" hosting pod "kube-controller-manager-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.903242 2243792 pod_ready.go:81] duration metric: took 365.01426ms waiting for pod "kube-controller-manager-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	E0911 11:48:39.903251 2243792 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-862767" hosting pod "kube-controller-manager-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:39.903265 2243792 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-mdwrk" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:40.305172 2243792 pod_ready.go:97] node "test-preload-862767" hosting pod "kube-proxy-mdwrk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:40.305206 2243792 pod_ready.go:81] duration metric: took 401.933055ms waiting for pod "kube-proxy-mdwrk" in "kube-system" namespace to be "Ready" ...
	E0911 11:48:40.305217 2243792 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-862767" hosting pod "kube-proxy-mdwrk" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:40.305227 2243792 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:40.703819 2243792 pod_ready.go:97] node "test-preload-862767" hosting pod "kube-scheduler-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:40.703851 2243792 pod_ready.go:81] duration metric: took 398.616185ms waiting for pod "kube-scheduler-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	E0911 11:48:40.703863 2243792 pod_ready.go:66] WaitExtra: waitPodCondition: node "test-preload-862767" hosting pod "kube-scheduler-test-preload-862767" in "kube-system" namespace is currently not "Ready" (skipping!): node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:40.703879 2243792 pod_ready.go:38] duration metric: took 1.197278183s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:48:40.703910 2243792 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 11:48:40.716305 2243792 ops.go:34] apiserver oom_adj: -16
	I0911 11:48:40.716334 2243792 kubeadm.go:640] restartCluster took 22.786875342s
	I0911 11:48:40.716347 2243792 kubeadm.go:406] StartCluster complete in 22.830477606s
	I0911 11:48:40.716370 2243792 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:48:40.716457 2243792 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:48:40.717242 2243792 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:48:40.717581 2243792 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.24.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 11:48:40.717693 2243792 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 11:48:40.717834 2243792 config.go:182] Loaded profile config "test-preload-862767": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.24.4
	I0911 11:48:40.717837 2243792 addons.go:69] Setting default-storageclass=true in profile "test-preload-862767"
	I0911 11:48:40.717907 2243792 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "test-preload-862767"
	I0911 11:48:40.717808 2243792 addons.go:69] Setting storage-provisioner=true in profile "test-preload-862767"
	I0911 11:48:40.718034 2243792 addons.go:231] Setting addon storage-provisioner=true in "test-preload-862767"
	W0911 11:48:40.718054 2243792 addons.go:240] addon storage-provisioner should already be in state true
	I0911 11:48:40.718122 2243792 host.go:66] Checking if "test-preload-862767" exists ...
	I0911 11:48:40.718359 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:48:40.718316 2243792 kapi.go:59] client config for test-preload-862767: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:48:40.718562 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:48:40.718668 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:48:40.718720 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:48:40.723126 2243792 kapi.go:248] "coredns" deployment in "kube-system" namespace and "test-preload-862767" context rescaled to 1 replicas
	I0911 11:48:40.723201 2243792 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.144 Port:8443 KubernetesVersion:v1.24.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 11:48:40.725726 2243792 out.go:177] * Verifying Kubernetes components...
	I0911 11:48:40.727438 2243792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:48:40.736489 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37077
	I0911 11:48:40.736603 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41461
	I0911 11:48:40.737135 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:48:40.737709 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:48:40.737938 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:48:40.737978 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:48:40.738335 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:48:40.738356 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:48:40.738397 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:48:40.738739 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetState
	I0911 11:48:40.738781 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:48:40.739484 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:48:40.739550 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:48:40.742205 2243792 kapi.go:59] client config for test-preload-862767: &rest.Config{Host:"https://192.168.39.144:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/client.crt", KeyFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/test-preload-862767/client.key", CAFile:"/home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1d6c140), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0911 11:48:40.758362 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43277
	I0911 11:48:40.759069 2243792 addons.go:231] Setting addon default-storageclass=true in "test-preload-862767"
	W0911 11:48:40.759095 2243792 addons.go:240] addon default-storageclass should already be in state true
	I0911 11:48:40.759122 2243792 host.go:66] Checking if "test-preload-862767" exists ...
	I0911 11:48:40.759218 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:48:40.759557 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:48:40.759619 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:48:40.759942 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:48:40.759970 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:48:40.760621 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:48:40.760906 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetState
	I0911 11:48:40.762986 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:40.765554 2243792 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 11:48:40.767429 2243792 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:48:40.767453 2243792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 11:48:40.767478 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:40.771636 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:40.772215 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:40.772255 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:40.772517 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:40.772861 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:40.773064 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:40.773284 2243792 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa Username:docker}
	I0911 11:48:40.778852 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41679
	I0911 11:48:40.779492 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:48:40.780383 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:48:40.780420 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:48:40.780934 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:48:40.781491 2243792 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:48:40.781535 2243792 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:48:40.798615 2243792 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44129
	I0911 11:48:40.799244 2243792 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:48:40.799882 2243792 main.go:141] libmachine: Using API Version  1
	I0911 11:48:40.799909 2243792 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:48:40.800297 2243792 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:48:40.800541 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetState
	I0911 11:48:40.802382 2243792 main.go:141] libmachine: (test-preload-862767) Calling .DriverName
	I0911 11:48:40.802776 2243792 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 11:48:40.802808 2243792 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 11:48:40.802833 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHHostname
	I0911 11:48:40.806810 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:40.807361 2243792 main.go:141] libmachine: (test-preload-862767) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c8:38:88", ip: ""} in network mk-test-preload-862767: {Iface:virbr1 ExpiryTime:2023-09-11 12:47:56 +0000 UTC Type:0 Mac:52:54:00:c8:38:88 Iaid: IPaddr:192.168.39.144 Prefix:24 Hostname:test-preload-862767 Clientid:01:52:54:00:c8:38:88}
	I0911 11:48:40.807424 2243792 main.go:141] libmachine: (test-preload-862767) DBG | domain test-preload-862767 has defined IP address 192.168.39.144 and MAC address 52:54:00:c8:38:88 in network mk-test-preload-862767
	I0911 11:48:40.807871 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHPort
	I0911 11:48:40.808129 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHKeyPath
	I0911 11:48:40.808363 2243792 main.go:141] libmachine: (test-preload-862767) Calling .GetSSHUsername
	I0911 11:48:40.808621 2243792 sshutil.go:53] new ssh client: &{IP:192.168.39.144 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/test-preload-862767/id_rsa Username:docker}
	I0911 11:48:40.912176 2243792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 11:48:40.981564 2243792 node_ready.go:35] waiting up to 6m0s for node "test-preload-862767" to be "Ready" ...
	I0911 11:48:40.981983 2243792 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 11:48:40.981984 2243792 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 11:48:42.017244 2243792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.035214796s)
	I0911 11:48:42.017307 2243792 main.go:141] libmachine: Making call to close driver server
	I0911 11:48:42.017331 2243792 main.go:141] libmachine: (test-preload-862767) Calling .Close
	I0911 11:48:42.017247 2243792 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.24.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.105032312s)
	I0911 11:48:42.017401 2243792 main.go:141] libmachine: Making call to close driver server
	I0911 11:48:42.017417 2243792 main.go:141] libmachine: (test-preload-862767) Calling .Close
	I0911 11:48:42.017737 2243792 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:48:42.017758 2243792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:48:42.017838 2243792 main.go:141] libmachine: Making call to close driver server
	I0911 11:48:42.017853 2243792 main.go:141] libmachine: (test-preload-862767) Calling .Close
	I0911 11:48:42.017899 2243792 main.go:141] libmachine: (test-preload-862767) DBG | Closing plugin on server side
	I0911 11:48:42.017926 2243792 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:48:42.017938 2243792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:48:42.017951 2243792 main.go:141] libmachine: Making call to close driver server
	I0911 11:48:42.017962 2243792 main.go:141] libmachine: (test-preload-862767) Calling .Close
	I0911 11:48:42.018105 2243792 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:48:42.018139 2243792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:48:42.018153 2243792 main.go:141] libmachine: Making call to close driver server
	I0911 11:48:42.018168 2243792 main.go:141] libmachine: (test-preload-862767) Calling .Close
	I0911 11:48:42.018308 2243792 main.go:141] libmachine: (test-preload-862767) DBG | Closing plugin on server side
	I0911 11:48:42.018354 2243792 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:48:42.018369 2243792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:48:42.018422 2243792 main.go:141] libmachine: Successfully made call to close driver server
	I0911 11:48:42.018436 2243792 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 11:48:42.020761 2243792 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 11:48:42.022722 2243792 addons.go:502] enable addons completed in 1.305032936s: enabled=[storage-provisioner default-storageclass]
	I0911 11:48:43.107924 2243792 node_ready.go:58] node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:45.608566 2243792 node_ready.go:58] node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:47.609502 2243792 node_ready.go:58] node "test-preload-862767" has status "Ready":"False"
	I0911 11:48:48.607266 2243792 node_ready.go:49] node "test-preload-862767" has status "Ready":"True"
	I0911 11:48:48.607291 2243792 node_ready.go:38] duration metric: took 7.625692901s waiting for node "test-preload-862767" to be "Ready" ...
	I0911 11:48:48.607310 2243792 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:48:48.612429 2243792 pod_ready.go:78] waiting up to 6m0s for pod "coredns-6d4b75cb6d-f5c6v" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.617513 2243792 pod_ready.go:92] pod "coredns-6d4b75cb6d-f5c6v" in "kube-system" namespace has status "Ready":"True"
	I0911 11:48:48.617534 2243792 pod_ready.go:81] duration metric: took 5.079571ms waiting for pod "coredns-6d4b75cb6d-f5c6v" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.617547 2243792 pod_ready.go:78] waiting up to 6m0s for pod "etcd-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.622545 2243792 pod_ready.go:92] pod "etcd-test-preload-862767" in "kube-system" namespace has status "Ready":"True"
	I0911 11:48:48.622564 2243792 pod_ready.go:81] duration metric: took 5.011403ms waiting for pod "etcd-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.622572 2243792 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.627894 2243792 pod_ready.go:92] pod "kube-apiserver-test-preload-862767" in "kube-system" namespace has status "Ready":"True"
	I0911 11:48:48.627913 2243792 pod_ready.go:81] duration metric: took 5.334535ms waiting for pod "kube-apiserver-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.627923 2243792 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.633340 2243792 pod_ready.go:92] pod "kube-controller-manager-test-preload-862767" in "kube-system" namespace has status "Ready":"True"
	I0911 11:48:48.633360 2243792 pod_ready.go:81] duration metric: took 5.431512ms waiting for pod "kube-controller-manager-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:48.633368 2243792 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-mdwrk" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:49.008438 2243792 pod_ready.go:92] pod "kube-proxy-mdwrk" in "kube-system" namespace has status "Ready":"True"
	I0911 11:48:49.008462 2243792 pod_ready.go:81] duration metric: took 375.089161ms waiting for pod "kube-proxy-mdwrk" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:49.008472 2243792 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:49.408511 2243792 pod_ready.go:92] pod "kube-scheduler-test-preload-862767" in "kube-system" namespace has status "Ready":"True"
	I0911 11:48:49.408535 2243792 pod_ready.go:81] duration metric: took 400.056063ms waiting for pod "kube-scheduler-test-preload-862767" in "kube-system" namespace to be "Ready" ...
	I0911 11:48:49.408544 2243792 pod_ready.go:38] duration metric: took 801.226396ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:48:49.408565 2243792 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:48:49.408616 2243792 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:48:49.423843 2243792 api_server.go:72] duration metric: took 8.700597711s to wait for apiserver process to appear ...
	I0911 11:48:49.423875 2243792 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:48:49.423893 2243792 api_server.go:253] Checking apiserver healthz at https://192.168.39.144:8443/healthz ...
	I0911 11:48:49.429929 2243792 api_server.go:279] https://192.168.39.144:8443/healthz returned 200:
	ok
	I0911 11:48:49.430908 2243792 api_server.go:141] control plane version: v1.24.4
	I0911 11:48:49.430937 2243792 api_server.go:131] duration metric: took 7.054788ms to wait for apiserver health ...
	I0911 11:48:49.430949 2243792 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:48:49.611481 2243792 system_pods.go:59] 7 kube-system pods found
	I0911 11:48:49.611515 2243792 system_pods.go:61] "coredns-6d4b75cb6d-f5c6v" [b7277dc6-857d-42fb-9087-661c5b3c05fb] Running
	I0911 11:48:49.611520 2243792 system_pods.go:61] "etcd-test-preload-862767" [0c5acd9c-564a-43c8-8fea-a3a90a37e943] Running
	I0911 11:48:49.611524 2243792 system_pods.go:61] "kube-apiserver-test-preload-862767" [cf4abc5c-71af-4c49-900c-d7dacd9119b9] Running
	I0911 11:48:49.611528 2243792 system_pods.go:61] "kube-controller-manager-test-preload-862767" [9a26b860-e90a-48f1-b68a-70f6c34bd9d3] Running
	I0911 11:48:49.611532 2243792 system_pods.go:61] "kube-proxy-mdwrk" [1a75dbb5-3f86-4988-8b59-5a0a3d7ea584] Running
	I0911 11:48:49.611540 2243792 system_pods.go:61] "kube-scheduler-test-preload-862767" [ee9175d7-5551-46ba-9215-370152e737bc] Running
	I0911 11:48:49.611543 2243792 system_pods.go:61] "storage-provisioner" [473f77f8-304b-4222-96da-f05c82f16f33] Running
	I0911 11:48:49.611548 2243792 system_pods.go:74] duration metric: took 180.594252ms to wait for pod list to return data ...
	I0911 11:48:49.611556 2243792 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:48:49.807786 2243792 default_sa.go:45] found service account: "default"
	I0911 11:48:49.807820 2243792 default_sa.go:55] duration metric: took 196.258665ms for default service account to be created ...
	I0911 11:48:49.807830 2243792 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:48:50.012412 2243792 system_pods.go:86] 7 kube-system pods found
	I0911 11:48:50.012458 2243792 system_pods.go:89] "coredns-6d4b75cb6d-f5c6v" [b7277dc6-857d-42fb-9087-661c5b3c05fb] Running
	I0911 11:48:50.012468 2243792 system_pods.go:89] "etcd-test-preload-862767" [0c5acd9c-564a-43c8-8fea-a3a90a37e943] Running
	I0911 11:48:50.012474 2243792 system_pods.go:89] "kube-apiserver-test-preload-862767" [cf4abc5c-71af-4c49-900c-d7dacd9119b9] Running
	I0911 11:48:50.012480 2243792 system_pods.go:89] "kube-controller-manager-test-preload-862767" [9a26b860-e90a-48f1-b68a-70f6c34bd9d3] Running
	I0911 11:48:50.012485 2243792 system_pods.go:89] "kube-proxy-mdwrk" [1a75dbb5-3f86-4988-8b59-5a0a3d7ea584] Running
	I0911 11:48:50.012491 2243792 system_pods.go:89] "kube-scheduler-test-preload-862767" [ee9175d7-5551-46ba-9215-370152e737bc] Running
	I0911 11:48:50.012496 2243792 system_pods.go:89] "storage-provisioner" [473f77f8-304b-4222-96da-f05c82f16f33] Running
	I0911 11:48:50.012509 2243792 system_pods.go:126] duration metric: took 204.669658ms to wait for k8s-apps to be running ...
	I0911 11:48:50.012519 2243792 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:48:50.012590 2243792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:48:50.036457 2243792 system_svc.go:56] duration metric: took 23.921011ms WaitForService to wait for kubelet.
	I0911 11:48:50.036497 2243792 kubeadm.go:581] duration metric: took 9.313256818s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:48:50.036523 2243792 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:48:50.209577 2243792 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:48:50.209610 2243792 node_conditions.go:123] node cpu capacity is 2
	I0911 11:48:50.209622 2243792 node_conditions.go:105] duration metric: took 173.093042ms to run NodePressure ...
	I0911 11:48:50.209646 2243792 start.go:228] waiting for startup goroutines ...
	I0911 11:48:50.209653 2243792 start.go:233] waiting for cluster config update ...
	I0911 11:48:50.209666 2243792 start.go:242] writing updated cluster config ...
	I0911 11:48:50.209991 2243792 ssh_runner.go:195] Run: rm -f paused
	I0911 11:48:50.270643 2243792 start.go:600] kubectl: 1.28.1, cluster: 1.24.4 (minor skew: 4)
	I0911 11:48:50.273078 2243792 out.go:177] 
	W0911 11:48:50.274958 2243792 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.24.4.
	I0911 11:48:50.276561 2243792 out.go:177]   - Want kubectl v1.24.4? Try 'minikube kubectl -- get pods -A'
	I0911 11:48:50.278400 2243792 out.go:177] * Done! kubectl is now configured to use "test-preload-862767" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 11:47:56 UTC, ends at Mon 2023-09-11 11:48:51 UTC. --
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.162088499Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&PodSandboxMetadata{Name:coredns-6d4b75cb6d-f5c6v,Uid:b7277dc6-857d-42fb-9087-661c5b3c05fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694432922192863424,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,k8s-app: kube-dns,pod-template-hash: 6d4b75cb6d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:48:37.954212352Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&PodSandboxMetadata{Name:kube-proxy-mdwrk,Uid:1a75dbb5-3f86-4988-8b59-5a0a3d7ea584,Namespace:kube-system,A
ttempt:0,},State:SANDBOX_READY,CreatedAt:1694432919229500021,Labels:map[string]string{controller-revision-hash: 6fd4744df8,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a75dbb5-3f86-4988-8b59-5a0a3d7ea584,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:48:37.954218361Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:473f77f8-304b-4222-96da-f05c82f16f33,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694432919199449844,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c
82f16f33,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-11T11:48:37.954194305Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&PodSandboxMetadata{Name:kube-scheduler-test-preload-862767,Uid:3720945
685e9b45e47ec9c5612e6a2ff,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694432910594432225,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e9b45e47ec9c5612e6a2ff,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 3720945685e9b45e47ec9c5612e6a2ff,kubernetes.io/config.seen: 2023-09-11T11:48:29.949941927Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-test-preload-862767,Uid:cf1267270fd3129ea7882500807aa051,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694432910577911836,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kuber
netes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: cf1267270fd3129ea7882500807aa051,kubernetes.io/config.seen: 2023-09-11T11:48:29.949941019Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&PodSandboxMetadata{Name:etcd-test-preload-862767,Uid:d10fbf84e1b2132b84d32f8783794a78,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694432910563184071,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.144:2379,kubernetes.io/config.hash: d10fbf84e1b2132b84d32f8783794a78,kubernetes.io/config.seen: 2023-09-11T11
:48:29.949924790Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&PodSandboxMetadata{Name:kube-apiserver-test-preload-862767,Uid:46fc55e4592a4cda441e0541f384ee0d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694432910493776574,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.144:8443,kubernetes.io/config.hash: 46fc55e4592a4cda441e0541f384ee0d,kubernetes.io/config.seen: 2023-09-11T11:48:29.949939814Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=fdcd7f8c-3829-44de-b504-c27696a473b8 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.163151031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=174200b8-51ae-4877-bcd9-773bea3ac59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.163205986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=174200b8-51ae-4877-bcd9-773bea3ac59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.163398375Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=174200b8-51ae-4877-bcd9-773bea3ac59a name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.173958788Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=64a7b0f8-0655-4022-8023-c0c2c0225040 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.174028548Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=64a7b0f8-0655-4022-8023-c0c2c0225040 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.175678277Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=64a7b0f8-0655-4022-8023-c0c2c0225040 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.231084189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1af0582c-049b-41f8-9d94-94fbe61deb3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.231176742Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1af0582c-049b-41f8-9d94-94fbe61deb3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.232403984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1af0582c-049b-41f8-9d94-94fbe61deb3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.278839746Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2da3b2db-f091-449a-beab-17185dec071b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.278905796Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2da3b2db-f091-449a-beab-17185dec071b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.279206738Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2da3b2db-f091-449a-beab-17185dec071b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.324728858Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3d0a8811-54cc-40eb-8834-849eb49f1921 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.324793792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3d0a8811-54cc-40eb-8834-849eb49f1921 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.325018104Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3d0a8811-54cc-40eb-8834-849eb49f1921 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.365327307Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=14e5ff63-f2a3-40e1-920a-8145ba72de2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.365394394Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=14e5ff63-f2a3-40e1-920a-8145ba72de2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.365674646Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=14e5ff63-f2a3-40e1-920a-8145ba72de2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.410784173Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4e8a9df2-d3f8-4e02-906d-3d061ad9be50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.410862260Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4e8a9df2-d3f8-4e02-906d-3d061ad9be50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.411068647Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4e8a9df2-d3f8-4e02-906d-3d061ad9be50 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.447997500Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=18a4f64a-0ab9-4a1a-b966-be4158662e26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.448086014Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=18a4f64a-0ab9-4a1a-b966-be4158662e26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:48:51 test-preload-862767 crio[711]: time="2023-09-11 11:48:51.448336644Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90,PodSandboxId:0c04aed6442966128cf1c5ee126610ef5cff23400b29339b51043fc98ca10141,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e,State:CONTAINER_RUNNING,CreatedAt:1694432922814999598,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-6d4b75cb6d-f5c6v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b7277dc6-857d-42fb-9087-661c5b3c05fb,},Annotations:map[string]string{io.kubernetes.container.hash: 13a446a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc,PodSandboxId:3ba80edaedfbeba8f93dfc0c86d781179dff4f8cf433847859d7fb0e9b3c59a1,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694432919963514847,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespa
ce: kube-system,io.kubernetes.pod.uid: 473f77f8-304b-4222-96da-f05c82f16f33,},Annotations:map[string]string{io.kubernetes.container.hash: f3e2962f,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a,PodSandboxId:174b2e1a874459b41388a4d3eb0117032029f6bebf449d5a2cb493bff27582c3,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:64a04a34b31fdf10b4c7fe9ff006dab818489a318115cfb284010d04e2888386,State:CONTAINER_RUNNING,CreatedAt:1694432919935991673,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-mdwrk,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1
a75dbb5-3f86-4988-8b59-5a0a3d7ea584,},Annotations:map[string]string{io.kubernetes.container.hash: c44683b7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523,PodSandboxId:4d6cc96d6f368db8a0a4738333fc7b43f533d5a59ffac6a8905b7ad764142f4f,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:378509dd1111937ca2791cf4c4814bc0647714e2ab2f4fc15396707ad1a987a2,State:CONTAINER_RUNNING,CreatedAt:1694432911643199938,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3720945685e
9b45e47ec9c5612e6a2ff,},Annotations:map[string]string{io.kubernetes.container.hash: 5b4977d5,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6,PodSandboxId:52e6dd9cf7807c3f0c4264cb797904910195b0f391718e0cce559527c1deb7a6,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:13f53ed1d91e2e11aac476ee9a0269fdda6cc4874eba903efd40daf50c55eee5,State:CONTAINER_RUNNING,CreatedAt:1694432911423802994,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d10fbf84e1b2132b84d32f8783794a78,},Annotations:map[string]string{
io.kubernetes.container.hash: 987f095b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8,PodSandboxId:033758f025cf4a269cba8470bb47a290bd84159961e8973a32adcc59afeaa7dd,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:77905fafb19047cc426efb6062757e244dc1615a93b0915792f598642229d891,State:CONTAINER_RUNNING,CreatedAt:1694432911404466402,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cf1267270fd3129ea7882500807aa051,},Annotati
ons:map[string]string{io.kubernetes.container.hash: eb7ed27d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8,PodSandboxId:e857ccd030eaf998c6212b32747c4a1ae73d375a7b161e6a9cf111439aaff8eb,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:4b6a3a220cb91e496cff56f267968c4bbb19d8593c21137d99b9bc735cb64857,State:CONTAINER_RUNNING,CreatedAt:1694432911237470880,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-test-preload-862767,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 46fc55e4592a4cda441e0541f384ee0d,},Annotations:map[string
]string{io.kubernetes.container.hash: 701f24a7,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=18a4f64a-0ab9-4a1a-b966-be4158662e26 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	bb3c40dd05482       a4ca41631cc7ac19ce1be3ebf0314ac5f47af7c711f17066006db82ee3b75b03   8 seconds ago       Running             coredns                   1                   0c04aed644296
	c25f6ca096a44       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   11 seconds ago      Running             storage-provisioner       1                   3ba80edaedfbe
	a863f0c2b7075       7a53d1e08ef58144850b48d05908b4ef5b611bff99a5a66dbcba7ab9f79433f7   11 seconds ago      Running             kube-proxy                1                   174b2e1a87445
	501170f4bdb58       03fa22539fc1ccdb96fb15098e7a02fff03d0e366ce5d80891eb0a3a8594a0c9   19 seconds ago      Running             kube-scheduler            1                   4d6cc96d6f368
	cd97599e2252c       aebe758cef4cd05b9f8cee39758227714d02f42ef3088023c1e3cd454f927a2b   20 seconds ago      Running             etcd                      1                   52e6dd9cf7807
	6f1dd0d581b8d       1f99cb6da9a82e81081f65acdad10cdca2e5ec4084f91009bdcff31dd6151d48   20 seconds ago      Running             kube-controller-manager   1                   033758f025cf4
	dea2ca3d4c018       6cab9d1bed1be49c215505c1a438ce0af66eb54b4e95f06e52037fcd36631f3d   20 seconds ago      Running             kube-apiserver            1                   e857ccd030eaf
	
	* 
	* ==> coredns [bb3c40dd05482430b2e91b2a555dfca092c47f49f826e5f3f28d5766c6124f90] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration MD5 = bbeeddb09682f41960fef01b05cb3a3d
	CoreDNS-1.8.6
	linux/amd64, go1.17.1, 13a9191
	[INFO] 127.0.0.1:43879 - 28535 "HINFO IN 5751787733586486867.9124498637505925769. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.0119663s
	
	* 
	* ==> describe nodes <==
	* Name:               test-preload-862767
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=test-preload-862767
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=test-preload-862767
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_47_10_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:47:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  test-preload-862767
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:48:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:48:48 +0000   Mon, 11 Sep 2023 11:47:03 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:48:48 +0000   Mon, 11 Sep 2023 11:47:03 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:48:48 +0000   Mon, 11 Sep 2023 11:47:03 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:48:48 +0000   Mon, 11 Sep 2023 11:48:48 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.144
	  Hostname:    test-preload-862767
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc1035cd2290426b948f0e83ec962a33
	  System UUID:                dc1035cd-2290-426b-948f-0e83ec962a33
	  Boot ID:                    447e6faa-955f-4c70-9c00-7673de81373d
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.24.4
	  Kube-Proxy Version:         v1.24.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                           CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                           ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-6d4b75cb6d-f5c6v                       100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     88s
	  kube-system                 etcd-test-preload-862767                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         103s
	  kube-system                 kube-apiserver-test-preload-862767             250m (12%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-controller-manager-test-preload-862767    200m (10%)    0 (0%)      0 (0%)           0 (0%)         101s
	  kube-system                 kube-proxy-mdwrk                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         89s
	  kube-system                 kube-scheduler-test-preload-862767             100m (5%)     0 (0%)      0 (0%)           0 (0%)         100s
	  kube-system                 storage-provisioner                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         87s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 84s                  kube-proxy       
	  Normal  Starting                 11s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  111s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 111s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  111s (x3 over 111s)  kubelet          Node test-preload-862767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     111s (x3 over 111s)  kubelet          Node test-preload-862767 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    111s (x3 over 111s)  kubelet          Node test-preload-862767 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  101s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 101s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  101s                 kubelet          Node test-preload-862767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    101s                 kubelet          Node test-preload-862767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     101s                 kubelet          Node test-preload-862767 status is now: NodeHasSufficientPID
	  Normal  NodeReady                90s                  kubelet          Node test-preload-862767 status is now: NodeReady
	  Normal  RegisteredNode           90s                  node-controller  Node test-preload-862767 event: Registered Node test-preload-862767 in Controller
	  Normal  Starting                 22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  21s (x8 over 21s)    kubelet          Node test-preload-862767 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    21s (x8 over 21s)    kubelet          Node test-preload-862767 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     21s (x7 over 21s)    kubelet          Node test-preload-862767 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           1s                   node-controller  Node test-preload-862767 event: Registered Node test-preload-862767 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep11 11:47] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.092887] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.780427] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.925322] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.152661] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.571989] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[Sep11 11:48] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.109255] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.148413] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.121428] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.239519] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[ +24.830363] systemd-fstab-generator[1098]: Ignoring "noauto" for root device
	[ +10.715947] kauditd_printk_skb: 7 callbacks suppressed
	[ +10.822675] kauditd_printk_skb: 13 callbacks suppressed
	
	* 
	* ==> etcd [cd97599e2252c33ba362aac4b7387569b1d688adabe490ac099aa92bdc1927b6] <==
	* {"level":"info","ts":"2023-09-11T11:48:33.145Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"42163c43c38ae515","local-server-version":"3.5.3","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-09-11T11:48:33.145Z","caller":"etcdserver/server.go:752","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-11T11:48:33.184Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 switched to configuration voters=(4762059917732013333)"}
	{"level":"info","ts":"2023-09-11T11:48:33.185Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","added-peer-id":"42163c43c38ae515","added-peer-peer-urls":["https://192.168.39.144:2380"]}
	{"level":"info","ts":"2023-09-11T11:48:33.185Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"b6240fb2000e40e9","local-member-id":"42163c43c38ae515","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:48:33.185Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:48:33.215Z","caller":"embed/etcd.go:688","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:48:33.215Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"42163c43c38ae515","initial-advertise-peer-urls":["https://192.168.39.144:2380"],"listen-peer-urls":["https://192.168.39.144:2380"],"advertise-client-urls":["https://192.168.39.144:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.144:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:48:33.215Z","caller":"embed/etcd.go:763","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T11:48:33.215Z","caller":"embed/etcd.go:581","msg":"serving peer traffic","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2023-09-11T11:48:33.215Z","caller":"embed/etcd.go:553","msg":"cmux::serve","address":"192.168.39.144:2380"}
	{"level":"info","ts":"2023-09-11T11:48:34.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T11:48:34.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:48:34.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgPreVoteResp from 42163c43c38ae515 at term 2"}
	{"level":"info","ts":"2023-09-11T11:48:34.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:48:34.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 received MsgVoteResp from 42163c43c38ae515 at term 3"}
	{"level":"info","ts":"2023-09-11T11:48:34.908Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"42163c43c38ae515 became leader at term 3"}
	{"level":"info","ts":"2023-09-11T11:48:34.909Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 42163c43c38ae515 elected leader 42163c43c38ae515 at term 3"}
	{"level":"info","ts":"2023-09-11T11:48:34.909Z","caller":"etcdserver/server.go:2042","msg":"published local member to cluster through raft","local-member-id":"42163c43c38ae515","local-member-attributes":"{Name:test-preload-862767 ClientURLs:[https://192.168.39.144:2379]}","request-path":"/0/members/42163c43c38ae515/attributes","cluster-id":"b6240fb2000e40e9","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:48:34.909Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:48:34.911Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"192.168.39.144:2379"}
	{"level":"info","ts":"2023-09-11T11:48:34.911Z","caller":"embed/serve.go:98","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:48:34.912Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:48:34.912Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:48:34.913Z","caller":"embed/serve.go:188","msg":"serving client traffic securely","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:48:51 up 1 min,  0 users,  load average: 1.58, 0.42, 0.14
	Linux test-preload-862767 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [dea2ca3d4c0182ec99d760347cada50c2ccfa530d44bbaa4c384986b977331f8] <==
	* I0911 11:48:37.706206       1 establishing_controller.go:76] Starting EstablishingController
	I0911 11:48:37.706261       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0911 11:48:37.706275       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0911 11:48:37.706287       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0911 11:48:37.706328       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0911 11:48:37.723786       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0911 11:48:37.797796       1 shared_informer.go:262] Caches are synced for crd-autoregister
	I0911 11:48:37.798318       1 shared_informer.go:262] Caches are synced for cluster_authentication_trust_controller
	E0911 11:48:37.804725       1 controller.go:169] Error removing old endpoints from kubernetes service: no master IPs were listed in storage, refusing to erase all endpoints for the kubernetes service
	I0911 11:48:37.830015       1 shared_informer.go:262] Caches are synced for node_authorizer
	I0911 11:48:37.856130       1 apf_controller.go:322] Running API Priority and Fairness config worker
	I0911 11:48:37.856735       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:48:37.857158       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:48:37.882230       1 controller.go:611] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:48:37.897734       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:48:38.290494       1 controller.go:132] OpenAPI AggregationController: action for item k8s_internal_local_delegation_chain_0000000000: Nothing (removed from the queue).
	I0911 11:48:38.668364       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:48:39.322214       1 controller.go:611] quota admission added evaluator for: serviceaccounts
	I0911 11:48:39.365830       1 controller.go:611] quota admission added evaluator for: deployments.apps
	I0911 11:48:39.433207       1 controller.go:611] quota admission added evaluator for: daemonsets.apps
	I0911 11:48:39.462899       1 controller.go:611] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:48:39.473734       1 controller.go:611] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 11:48:40.420676       1 controller.go:611] quota admission added evaluator for: events.events.k8s.io
	I0911 11:48:51.032881       1 controller.go:611] quota admission added evaluator for: endpoints
	I0911 11:48:51.144909       1 controller.go:611] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	* 
	* ==> kube-controller-manager [6f1dd0d581b8dd1ed616db23fce9ca1a5b83e765cdacf13002b154d4d55007f8] <==
	* I0911 11:48:50.816403       1 shared_informer.go:262] Caches are synced for ephemeral
	I0911 11:48:50.823509       1 shared_informer.go:262] Caches are synced for GC
	I0911 11:48:50.827280       1 shared_informer.go:262] Caches are synced for ReplicationController
	I0911 11:48:50.828044       1 shared_informer.go:262] Caches are synced for ReplicaSet
	I0911 11:48:50.828388       1 shared_informer.go:262] Caches are synced for TTL after finished
	I0911 11:48:50.829298       1 shared_informer.go:262] Caches are synced for stateful set
	I0911 11:48:50.840327       1 shared_informer.go:262] Caches are synced for certificate-csrapproving
	I0911 11:48:50.847099       1 shared_informer.go:262] Caches are synced for job
	I0911 11:48:50.851052       1 shared_informer.go:262] Caches are synced for PVC protection
	I0911 11:48:50.864521       1 shared_informer.go:262] Caches are synced for disruption
	I0911 11:48:50.865092       1 disruption.go:371] Sending events to api server.
	I0911 11:48:50.878195       1 shared_informer.go:262] Caches are synced for endpoint_slice
	I0911 11:48:50.901513       1 shared_informer.go:262] Caches are synced for endpoint_slice_mirroring
	I0911 11:48:50.992053       1 shared_informer.go:262] Caches are synced for taint
	I0911 11:48:50.992241       1 node_lifecycle_controller.go:1399] Initializing eviction metric for zone: 
	W0911 11:48:50.992350       1 node_lifecycle_controller.go:1014] Missing timestamp for Node test-preload-862767. Assuming now as a timestamp.
	I0911 11:48:50.992406       1 node_lifecycle_controller.go:1215] Controller detected that zone  is now in state Normal.
	I0911 11:48:50.992239       1 taint_manager.go:187] "Starting NoExecuteTaintManager"
	I0911 11:48:50.992699       1 event.go:294] "Event occurred" object="test-preload-862767" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node test-preload-862767 event: Registered Node test-preload-862767 in Controller"
	I0911 11:48:51.003368       1 shared_informer.go:262] Caches are synced for daemon sets
	I0911 11:48:51.031119       1 shared_informer.go:262] Caches are synced for resource quota
	I0911 11:48:51.035808       1 shared_informer.go:262] Caches are synced for resource quota
	I0911 11:48:51.436389       1 shared_informer.go:262] Caches are synced for garbage collector
	I0911 11:48:51.436519       1 garbagecollector.go:158] Garbage collector: all resource monitors have synced. Proceeding to collect garbage
	I0911 11:48:51.517239       1 shared_informer.go:262] Caches are synced for garbage collector
	
	* 
	* ==> kube-proxy [a863f0c2b7075a34d8b017552155d43ce0a825ea9e34c01aee26b6a634ce371a] <==
	* I0911 11:48:40.363095       1 node.go:163] Successfully retrieved node IP: 192.168.39.144
	I0911 11:48:40.363195       1 server_others.go:138] "Detected node IP" address="192.168.39.144"
	I0911 11:48:40.363258       1 server_others.go:578] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
	I0911 11:48:40.409861       1 server_others.go:199] "kube-proxy running in single-stack mode, this ipFamily is not supported" ipFamily=IPv6
	I0911 11:48:40.409985       1 server_others.go:206] "Using iptables Proxier"
	I0911 11:48:40.410725       1 proxier.go:259] "Setting route_localnet=1, use nodePortAddresses to filter loopback addresses for NodePorts to skip it https://issues.k8s.io/90259"
	I0911 11:48:40.413019       1 server.go:661] "Version info" version="v1.24.4"
	I0911 11:48:40.413062       1 server.go:663] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:48:40.414798       1 config.go:317] "Starting service config controller"
	I0911 11:48:40.414910       1 shared_informer.go:255] Waiting for caches to sync for service config
	I0911 11:48:40.414939       1 config.go:226] "Starting endpoint slice config controller"
	I0911 11:48:40.414943       1 shared_informer.go:255] Waiting for caches to sync for endpoint slice config
	I0911 11:48:40.416439       1 config.go:444] "Starting node config controller"
	I0911 11:48:40.416480       1 shared_informer.go:255] Waiting for caches to sync for node config
	I0911 11:48:40.515173       1 shared_informer.go:262] Caches are synced for endpoint slice config
	I0911 11:48:40.515239       1 shared_informer.go:262] Caches are synced for service config
	I0911 11:48:40.517198       1 shared_informer.go:262] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [501170f4bdb58bdf498600f38a5dcfa83cf212c4b19d0653d6e9f40fab8a2523] <==
	* I0911 11:48:33.902010       1 serving.go:348] Generated self-signed cert in-memory
	W0911 11:48:37.725392       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 11:48:37.727440       1 authentication.go:346] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 11:48:37.727527       1 authentication.go:347] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:48:37.727647       1 authentication.go:348] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:48:37.810399       1 server.go:147] "Starting Kubernetes Scheduler" version="v1.24.4"
	I0911 11:48:37.810504       1 server.go:149] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:48:37.814152       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 11:48:37.814301       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 11:48:37.814382       1 shared_informer.go:255] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:48:37.814320       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 11:48:37.915689       1 shared_informer.go:262] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:47:56 UTC, ends at Mon 2023-09-11 11:48:51 UTC. --
	Sep 11 11:48:37 test-preload-862767 kubelet[1104]: E0911 11:48:37.957719    1104 pod_workers.go:951] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: No CNI configuration file in /etc/cni/net.d/. Has your network provider started?" pod="kube-system/coredns-6d4b75cb6d-f5c6v" podUID=b7277dc6-857d-42fb-9087-661c5b3c05fb
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023520    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/1a75dbb5-3f86-4988-8b59-5a0a3d7ea584-kube-proxy\") pod \"kube-proxy-mdwrk\" (UID: \"1a75dbb5-3f86-4988-8b59-5a0a3d7ea584\") " pod="kube-system/kube-proxy-mdwrk"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023679    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/1a75dbb5-3f86-4988-8b59-5a0a3d7ea584-xtables-lock\") pod \"kube-proxy-mdwrk\" (UID: \"1a75dbb5-3f86-4988-8b59-5a0a3d7ea584\") " pod="kube-system/kube-proxy-mdwrk"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023704    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1a75dbb5-3f86-4988-8b59-5a0a3d7ea584-lib-modules\") pod \"kube-proxy-mdwrk\" (UID: \"1a75dbb5-3f86-4988-8b59-5a0a3d7ea584\") " pod="kube-system/kube-proxy-mdwrk"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023725    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/473f77f8-304b-4222-96da-f05c82f16f33-tmp\") pod \"storage-provisioner\" (UID: \"473f77f8-304b-4222-96da-f05c82f16f33\") " pod="kube-system/storage-provisioner"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023799    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9qzx\" (UniqueName: \"kubernetes.io/projected/473f77f8-304b-4222-96da-f05c82f16f33-kube-api-access-r9qzx\") pod \"storage-provisioner\" (UID: \"473f77f8-304b-4222-96da-f05c82f16f33\") " pod="kube-system/storage-provisioner"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023826    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4x2m\" (UniqueName: \"kubernetes.io/projected/1a75dbb5-3f86-4988-8b59-5a0a3d7ea584-kube-api-access-s4x2m\") pod \"kube-proxy-mdwrk\" (UID: \"1a75dbb5-3f86-4988-8b59-5a0a3d7ea584\") " pod="kube-system/kube-proxy-mdwrk"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023886    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b7277dc6-857d-42fb-9087-661c5b3c05fb-config-volume\") pod \"coredns-6d4b75cb6d-f5c6v\" (UID: \"b7277dc6-857d-42fb-9087-661c5b3c05fb\") " pod="kube-system/coredns-6d4b75cb6d-f5c6v"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023965    1104 reconciler.go:342] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkdt5\" (UniqueName: \"kubernetes.io/projected/b7277dc6-857d-42fb-9087-661c5b3c05fb-kube-api-access-vkdt5\") pod \"coredns-6d4b75cb6d-f5c6v\" (UID: \"b7277dc6-857d-42fb-9087-661c5b3c05fb\") " pod="kube-system/coredns-6d4b75cb6d-f5c6v"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.023993    1104 reconciler.go:159] "Reconciler: start to sync state"
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.457133    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7qxgq\" (UniqueName: \"kubernetes.io/projected/d37fccb4-5c0a-45e8-9483-63b1f0ad9210-kube-api-access-7qxgq\") pod \"d37fccb4-5c0a-45e8-9483-63b1f0ad9210\" (UID: \"d37fccb4-5c0a-45e8-9483-63b1f0ad9210\") "
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.457183    1104 reconciler.go:201] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d37fccb4-5c0a-45e8-9483-63b1f0ad9210-config-volume\") pod \"d37fccb4-5c0a-45e8-9483-63b1f0ad9210\" (UID: \"d37fccb4-5c0a-45e8-9483-63b1f0ad9210\") "
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: E0911 11:48:38.457848    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: E0911 11:48:38.457986    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b7277dc6-857d-42fb-9087-661c5b3c05fb-config-volume podName:b7277dc6-857d-42fb-9087-661c5b3c05fb nodeName:}" failed. No retries permitted until 2023-09-11 11:48:38.957952114 +0000 UTC m=+9.201080621 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b7277dc6-857d-42fb-9087-661c5b3c05fb-config-volume") pod "coredns-6d4b75cb6d-f5c6v" (UID: "b7277dc6-857d-42fb-9087-661c5b3c05fb") : object "kube-system"/"coredns" not registered
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: W0911 11:48:38.459272    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d37fccb4-5c0a-45e8-9483-63b1f0ad9210/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: W0911 11:48:38.459608    1104 empty_dir.go:519] Warning: Failed to clear quota on /var/lib/kubelet/pods/d37fccb4-5c0a-45e8-9483-63b1f0ad9210/volumes/kubernetes.io~projected/kube-api-access-7qxgq: clearQuota called, but quotas disabled
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.459932    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d37fccb4-5c0a-45e8-9483-63b1f0ad9210-kube-api-access-7qxgq" (OuterVolumeSpecName: "kube-api-access-7qxgq") pod "d37fccb4-5c0a-45e8-9483-63b1f0ad9210" (UID: "d37fccb4-5c0a-45e8-9483-63b1f0ad9210"). InnerVolumeSpecName "kube-api-access-7qxgq". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.460233    1104 operation_generator.go:863] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d37fccb4-5c0a-45e8-9483-63b1f0ad9210-config-volume" (OuterVolumeSpecName: "config-volume") pod "d37fccb4-5c0a-45e8-9483-63b1f0ad9210" (UID: "d37fccb4-5c0a-45e8-9483-63b1f0ad9210"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.557798    1104 reconciler.go:384] "Volume detached for volume \"kube-api-access-7qxgq\" (UniqueName: \"kubernetes.io/projected/d37fccb4-5c0a-45e8-9483-63b1f0ad9210-kube-api-access-7qxgq\") on node \"test-preload-862767\" DevicePath \"\""
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: I0911 11:48:38.557860    1104 reconciler.go:384] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d37fccb4-5c0a-45e8-9483-63b1f0ad9210-config-volume\") on node \"test-preload-862767\" DevicePath \"\""
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: E0911 11:48:38.959496    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 11 11:48:38 test-preload-862767 kubelet[1104]: E0911 11:48:38.959655    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b7277dc6-857d-42fb-9087-661c5b3c05fb-config-volume podName:b7277dc6-857d-42fb-9087-661c5b3c05fb nodeName:}" failed. No retries permitted until 2023-09-11 11:48:39.959627471 +0000 UTC m=+10.202755987 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b7277dc6-857d-42fb-9087-661c5b3c05fb-config-volume") pod "coredns-6d4b75cb6d-f5c6v" (UID: "b7277dc6-857d-42fb-9087-661c5b3c05fb") : object "kube-system"/"coredns" not registered
	Sep 11 11:48:39 test-preload-862767 kubelet[1104]: E0911 11:48:39.968106    1104 configmap.go:193] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Sep 11 11:48:39 test-preload-862767 kubelet[1104]: E0911 11:48:39.968205    1104 nestedpendingoperations.go:335] Operation for "{volumeName:kubernetes.io/configmap/b7277dc6-857d-42fb-9087-661c5b3c05fb-config-volume podName:b7277dc6-857d-42fb-9087-661c5b3c05fb nodeName:}" failed. No retries permitted until 2023-09-11 11:48:41.968156144 +0000 UTC m=+12.211284652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/b7277dc6-857d-42fb-9087-661c5b3c05fb-config-volume") pod "coredns-6d4b75cb6d-f5c6v" (UID: "b7277dc6-857d-42fb-9087-661c5b3c05fb") : object "kube-system"/"coredns" not registered
	Sep 11 11:48:42 test-preload-862767 kubelet[1104]: I0911 11:48:42.055336    1104 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=d37fccb4-5c0a-45e8-9483-63b1f0ad9210 path="/var/lib/kubelet/pods/d37fccb4-5c0a-45e8-9483-63b1f0ad9210/volumes"
	
	* 
	* ==> storage-provisioner [c25f6ca096a44c2db02dd6dea356acd29c27b744ee7a1d42ce4473ff02408fbc] <==
	* I0911 11:48:40.202421       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p test-preload-862767 -n test-preload-862767
helpers_test.go:261: (dbg) Run:  kubectl --context test-preload-862767 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPreload FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "test-preload-862767" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-862767
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-862767: (1.172220889s)
--- FAIL: TestPreload (188.27s)

                                                
                                    
x
+
TestRunningBinaryUpgrade (172.84s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.527620922.exe start -p running-upgrade-569346 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:133: (dbg) Non-zero exit: /tmp/minikube-v1.6.2.527620922.exe start -p running-upgrade-569346 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: exit status 70 (1.587559725s)

                                                
                                                
-- stdout --
	! [running-upgrade-569346] minikube v1.6.2 on Ubuntu 20.04
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	  - KUBECONFIG=/tmp/legacy_kubeconfig340435916
	* Selecting 'kvm2' driver from user configuration (alternates: [none])
	* Downloading VM boot image ...

                                                
                                                
-- /stdout --
** stderr ** 
	* minikube 1.31.2 is available! Download it: https://github.com/kubernetes/minikube/releases/tag/v1.31.2
	* To disable this notice, run: 'minikube config set WantUpdateNotification false'
	
	
	! 'kvm2' driver reported an issue: /usr/bin/virsh domcapabilities --virttype kvm failed:
	error: failed to get emulator capabilities
	error: invalid argument: KVM is not supported by '/usr/bin/qemu-system-x86_64' on this host
	* Suggestion: Follow your Linux distribution instructions for configuring KVM
	* Documentation: https://minikube.sigs.k8s.io/docs/reference/drivers/kvm2/
	
	    > minikube-v1.6.0.iso.sha256: 65 B / 65 B [--------------] 100.00% ? p/s 0s    > minikube-v1.6.0.iso: 27.15 MiB / 150.93 MiB [-->_________] 17.99% ? p/s ?    > minikube-v1.6.0.iso: 63.80 MiB / 150.93 MiB [----->______] 42.27% ? p/s ?    > minikube-v1.6.0.iso: 115.44 MiB / 150.93 MiB [-------->__] 76.48% ? p/s ?    > minikube-v1.6.0.iso: 150.93 MiB / 150.93 MiB [] 100.00% 284.93 MiB p/s 1s* 
	X Failed to cache ISO: rename /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/minikube-v1.6.0.iso.download /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/minikube-v1.6.0.iso: no such file or directory
	* 
	* minikube is exiting due to an error. If the above message is not useful, open an issue:
	  - https://github.com/kubernetes/minikube/issues/new/choose

                                                
                                                
** /stderr **
version_upgrade_test.go:133: (dbg) Run:  /tmp/minikube-v1.6.2.527620922.exe start -p running-upgrade-569346 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
E0911 11:51:22.842707 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
version_upgrade_test.go:133: (dbg) Done: /tmp/minikube-v1.6.2.527620922.exe start -p running-upgrade-569346 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m27.099299938s)
version_upgrade_test.go:143: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-569346 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:143: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p running-upgrade-569346 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (20.635344223s)

                                                
                                                
-- stdout --
	* [running-upgrade-569346] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node running-upgrade-569346 in cluster running-upgrade-569346
	* Updating the running kvm2 "running-upgrade-569346" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:53:29.395234 2247040 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:53:29.395420 2247040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:53:29.395428 2247040 out.go:309] Setting ErrFile to fd 2...
	I0911 11:53:29.395434 2247040 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:53:29.395754 2247040 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:53:29.396614 2247040 out.go:303] Setting JSON to false
	I0911 11:53:29.397940 2247040 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236160,"bootTime":1694197049,"procs":230,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:53:29.398035 2247040 start.go:138] virtualization: kvm guest
	I0911 11:53:29.400769 2247040 out.go:177] * [running-upgrade-569346] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:53:29.403322 2247040 notify.go:220] Checking for updates...
	I0911 11:53:29.404130 2247040 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:53:29.409051 2247040 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:53:29.410875 2247040 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:53:29.412432 2247040 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:53:29.413906 2247040 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:53:29.415397 2247040 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:53:29.417457 2247040 config.go:182] Loaded profile config "running-upgrade-569346": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0911 11:53:29.417472 2247040 start_flags.go:686] config upgrade: Driver=kvm2
	I0911 11:53:29.417486 2247040 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:53:29.417600 2247040 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/running-upgrade-569346/config.json ...
	I0911 11:53:29.418475 2247040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:53:29.418543 2247040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:53:29.441648 2247040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36957
	I0911 11:53:29.445770 2247040 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:53:29.446571 2247040 main.go:141] libmachine: Using API Version  1
	I0911 11:53:29.446592 2247040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:53:29.446991 2247040 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:53:29.447141 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:29.449326 2247040 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0911 11:53:29.451026 2247040 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:53:29.451510 2247040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:53:29.451571 2247040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:53:29.474302 2247040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33285
	I0911 11:53:29.474951 2247040 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:53:29.475641 2247040 main.go:141] libmachine: Using API Version  1
	I0911 11:53:29.475670 2247040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:53:29.476153 2247040 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:53:29.476375 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:29.533935 2247040 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 11:53:29.535673 2247040 start.go:298] selected driver: kvm2
	I0911 11:53:29.535696 2247040 start.go:902] validating driver "kvm2" against &{Name:running-upgrade-569346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.45 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0911 11:53:29.535874 2247040 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:53:29.536969 2247040 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.537105 2247040 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:53:29.555647 2247040 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:53:29.555986 2247040 cni.go:84] Creating CNI manager for ""
	I0911 11:53:29.555998 2247040 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0911 11:53:29.556004 2247040 start_flags.go:321] config:
	{Name:running-upgrade-569346 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.45 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0911 11:53:29.556202 2247040 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.558402 2247040 out.go:177] * Starting control plane node running-upgrade-569346 in cluster running-upgrade-569346
	I0911 11:53:29.560073 2247040 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0911 11:53:29.592521 2247040 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0911 11:53:29.592715 2247040 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/running-upgrade-569346/config.json ...
	I0911 11:53:29.593129 2247040 start.go:365] acquiring machines lock for running-upgrade-569346: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:53:29.593450 2247040 cache.go:107] acquiring lock: {Name:mk84e75269bc58adba7d0b682b95ab327a8a8363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.593497 2247040 cache.go:107] acquiring lock: {Name:mkc22284b9f892c2fcb14c256537036255115bd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.593534 2247040 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 11:53:29.593543 2247040 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 102.496µs
	I0911 11:53:29.593554 2247040 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 11:53:29.593571 2247040 cache.go:107] acquiring lock: {Name:mka2989037021eece020a023217f187fc2a2deac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.593523 2247040 cache.go:107] acquiring lock: {Name:mk3f982b58c40c85cf5eadf9fc9a698dc136a916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.593636 2247040 cache.go:107] acquiring lock: {Name:mk0c748a64b7c94f3a24d0cfd25a318aa01b41b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.593666 2247040 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:53:29.593696 2247040 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.17.0
	I0911 11:53:29.593736 2247040 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.5
	I0911 11:53:29.593454 2247040 cache.go:107] acquiring lock: {Name:mk9b9aecf9eae48ef42d9bbe9b560e5eb4ecc0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.593862 2247040 cache.go:107] acquiring lock: {Name:mk6ddb59fe1ed56d56a8858d6009e9b25831f2ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.593892 2247040 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0911 11:53:29.593901 2247040 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0911 11:53:29.593962 2247040 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.17.0
	I0911 11:53:29.593958 2247040 cache.go:107] acquiring lock: {Name:mka28b32897ff39a89627329cc92986685d2f4e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:53:29.594153 2247040 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.17.0
	I0911 11:53:29.595352 2247040 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.17.0
	I0911 11:53:29.595373 2247040 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.17.0
	I0911 11:53:29.595390 2247040 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.3-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.3-0
	I0911 11:53:29.595450 2247040 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.17.0
	I0911 11:53:29.595454 2247040 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0911 11:53:29.595602 2247040 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.5: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.5
	I0911 11:53:29.595686 2247040 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.17.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.17.0
	I0911 11:53:29.762204 2247040 cache.go:162] opening:  /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0
	I0911 11:53:29.767763 2247040 cache.go:162] opening:  /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5
	I0911 11:53:29.772407 2247040 cache.go:162] opening:  /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0911 11:53:29.773623 2247040 cache.go:162] opening:  /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0
	I0911 11:53:29.792213 2247040 cache.go:162] opening:  /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0
	I0911 11:53:29.809220 2247040 cache.go:162] opening:  /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0
	I0911 11:53:29.889072 2247040 cache.go:162] opening:  /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0
	I0911 11:53:29.932653 2247040 cache.go:157] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0911 11:53:29.932694 2247040 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 339.194592ms
	I0911 11:53:29.932710 2247040 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0911 11:53:30.303870 2247040 cache.go:157] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0911 11:53:30.303915 2247040 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 710.282307ms
	I0911 11:53:30.303938 2247040 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0911 11:53:30.776087 2247040 cache.go:157] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0911 11:53:30.776189 2247040 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 1.182328502s
	I0911 11:53:30.776219 2247040 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0911 11:53:30.885662 2247040 cache.go:157] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0911 11:53:30.885697 2247040 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 1.292264332s
	I0911 11:53:30.885712 2247040 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0911 11:53:31.069400 2247040 cache.go:157] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0911 11:53:31.069440 2247040 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 1.475943914s
	I0911 11:53:31.069460 2247040 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0911 11:53:31.334262 2247040 cache.go:157] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0911 11:53:31.334303 2247040 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 1.740732199s
	I0911 11:53:31.334320 2247040 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0911 11:53:31.496910 2247040 cache.go:157] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0911 11:53:31.496955 2247040 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 1.902996164s
	I0911 11:53:31.496973 2247040 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0911 11:53:31.497001 2247040 cache.go:87] Successfully saved all images to host disk.
	I0911 11:53:46.170147 2247040 start.go:369] acquired machines lock for "running-upgrade-569346" in 16.576962227s
	I0911 11:53:46.170214 2247040 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:53:46.170223 2247040 fix.go:54] fixHost starting: minikube
	I0911 11:53:46.170679 2247040 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:53:46.170728 2247040 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:53:46.189140 2247040 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36601
	I0911 11:53:46.189655 2247040 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:53:46.190313 2247040 main.go:141] libmachine: Using API Version  1
	I0911 11:53:46.190341 2247040 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:53:46.190746 2247040 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:53:46.190947 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:46.191108 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetState
	I0911 11:53:46.193000 2247040 fix.go:102] recreateIfNeeded on running-upgrade-569346: state=Running err=<nil>
	W0911 11:53:46.193022 2247040 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:53:46.195853 2247040 out.go:177] * Updating the running kvm2 "running-upgrade-569346" VM ...
	I0911 11:53:46.197396 2247040 machine.go:88] provisioning docker machine ...
	I0911 11:53:46.197437 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:46.197765 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetMachineName
	I0911 11:53:46.197974 2247040 buildroot.go:166] provisioning hostname "running-upgrade-569346"
	I0911 11:53:46.198003 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetMachineName
	I0911 11:53:46.198158 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:46.201069 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.201515 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:46.201566 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.201785 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:46.202010 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:46.202199 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:46.202406 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:46.202623 2247040 main.go:141] libmachine: Using SSH client type: native
	I0911 11:53:46.203210 2247040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.45 22 <nil> <nil>}
	I0911 11:53:46.203236 2247040 main.go:141] libmachine: About to run SSH command:
	sudo hostname running-upgrade-569346 && echo "running-upgrade-569346" | sudo tee /etc/hostname
	I0911 11:53:46.325968 2247040 main.go:141] libmachine: SSH cmd err, output: <nil>: running-upgrade-569346
	
	I0911 11:53:46.326008 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:46.329105 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.329608 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:46.329648 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.329899 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:46.330139 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:46.330323 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:46.330469 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:46.330664 2247040 main.go:141] libmachine: Using SSH client type: native
	I0911 11:53:46.331262 2247040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.45 22 <nil> <nil>}
	I0911 11:53:46.331289 2247040 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\srunning-upgrade-569346' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 running-upgrade-569346/g' /etc/hosts;
				else 
					echo '127.0.1.1 running-upgrade-569346' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:53:46.442000 2247040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:53:46.442038 2247040 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:53:46.442067 2247040 buildroot.go:174] setting up certificates
	I0911 11:53:46.442081 2247040 provision.go:83] configureAuth start
	I0911 11:53:46.442098 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetMachineName
	I0911 11:53:46.442418 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetIP
	I0911 11:53:46.445153 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.445529 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:46.445562 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.445733 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:46.447926 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.448286 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:46.448317 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.448505 2247040 provision.go:138] copyHostCerts
	I0911 11:53:46.448576 2247040 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:53:46.448590 2247040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:53:46.448664 2247040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:53:46.448799 2247040 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:53:46.448825 2247040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:53:46.448857 2247040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:53:46.448958 2247040 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:53:46.448971 2247040 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:53:46.449006 2247040 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:53:46.449075 2247040 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.running-upgrade-569346 san=[192.168.50.45 192.168.50.45 localhost 127.0.0.1 minikube running-upgrade-569346]
	I0911 11:53:46.744560 2247040 provision.go:172] copyRemoteCerts
	I0911 11:53:46.744634 2247040 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:53:46.744671 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:46.748030 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.748447 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:46.748513 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.748712 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:46.748974 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:46.749150 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:46.749358 2247040 sshutil.go:53] new ssh client: &{IP:192.168.50.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/running-upgrade-569346/id_rsa Username:docker}
	I0911 11:53:46.850517 2247040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:53:46.867398 2247040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 11:53:46.883658 2247040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 11:53:46.899205 2247040 provision.go:86] duration metric: configureAuth took 457.103101ms
	I0911 11:53:46.899241 2247040 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:53:46.899448 2247040 config.go:182] Loaded profile config "running-upgrade-569346": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0911 11:53:46.899534 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:46.902345 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.902831 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:46.902866 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:46.903049 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:46.903268 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:46.903452 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:46.903600 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:46.903805 2247040 main.go:141] libmachine: Using SSH client type: native
	I0911 11:53:46.904286 2247040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.45 22 <nil> <nil>}
	I0911 11:53:46.904309 2247040 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:53:47.493596 2247040 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:53:47.493630 2247040 machine.go:91] provisioned docker machine in 1.296216009s
	I0911 11:53:47.493644 2247040 start.go:300] post-start starting for "running-upgrade-569346" (driver="kvm2")
	I0911 11:53:47.493657 2247040 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:53:47.493678 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:47.494112 2247040 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:53:47.494156 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:47.497293 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.497762 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:47.497797 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.498040 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:47.498274 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:47.498480 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:47.498652 2247040 sshutil.go:53] new ssh client: &{IP:192.168.50.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/running-upgrade-569346/id_rsa Username:docker}
	I0911 11:53:47.588648 2247040 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:53:47.593591 2247040 info.go:137] Remote host: Buildroot 2019.02.7
	I0911 11:53:47.593626 2247040 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:53:47.593725 2247040 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:53:47.593853 2247040 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:53:47.593998 2247040 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:53:47.605670 2247040 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:53:47.634987 2247040 start.go:303] post-start completed in 141.324628ms
	I0911 11:53:47.635013 2247040 fix.go:56] fixHost completed within 1.464790652s
	I0911 11:53:47.635038 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:47.638582 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.639054 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:47.639094 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.639368 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:47.639660 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:47.639861 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:47.640037 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:47.640224 2247040 main.go:141] libmachine: Using SSH client type: native
	I0911 11:53:47.640994 2247040 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.45 22 <nil> <nil>}
	I0911 11:53:47.641017 2247040 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 11:53:47.754760 2247040 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694433227.751864304
	
	I0911 11:53:47.754790 2247040 fix.go:206] guest clock: 1694433227.751864304
	I0911 11:53:47.754798 2247040 fix.go:219] Guest: 2023-09-11 11:53:47.751864304 +0000 UTC Remote: 2023-09-11 11:53:47.635016984 +0000 UTC m=+18.301201579 (delta=116.84732ms)
	I0911 11:53:47.754820 2247040 fix.go:190] guest clock delta is within tolerance: 116.84732ms
	I0911 11:53:47.754825 2247040 start.go:83] releasing machines lock for "running-upgrade-569346", held for 1.584641657s
	I0911 11:53:47.754872 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:47.755190 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetIP
	I0911 11:53:47.759833 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.760331 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:47.760367 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.760752 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:47.761500 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:47.761725 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .DriverName
	I0911 11:53:47.761828 2247040 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:53:47.761908 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:47.762237 2247040 ssh_runner.go:195] Run: cat /version.json
	I0911 11:53:47.762270 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHHostname
	I0911 11:53:47.765695 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.766059 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.766435 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:47.766471 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.766701 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:5a:23:cf", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:5a:23:cf Iaid: IPaddr:192.168.50.45 Prefix:24 Hostname:running-upgrade-569346 Clientid:01:52:54:00:5a:23:cf}
	I0911 11:53:47.766745 2247040 main.go:141] libmachine: (running-upgrade-569346) DBG | domain running-upgrade-569346 has defined IP address 192.168.50.45 and MAC address 52:54:00:5a:23:cf in network minikube-net
	I0911 11:53:47.766873 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:47.767083 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHPort
	I0911 11:53:47.767108 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:47.767226 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHKeyPath
	I0911 11:53:47.767425 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:47.767465 2247040 main.go:141] libmachine: (running-upgrade-569346) Calling .GetSSHUsername
	I0911 11:53:47.767644 2247040 sshutil.go:53] new ssh client: &{IP:192.168.50.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/running-upgrade-569346/id_rsa Username:docker}
	I0911 11:53:47.767650 2247040 sshutil.go:53] new ssh client: &{IP:192.168.50.45 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/running-upgrade-569346/id_rsa Username:docker}
	W0911 11:53:47.892851 2247040 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0911 11:53:47.892941 2247040 ssh_runner.go:195] Run: systemctl --version
	I0911 11:53:47.900194 2247040 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:53:48.092440 2247040 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 11:53:48.101817 2247040 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:53:48.101905 2247040 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:53:48.110546 2247040 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 11:53:48.110577 2247040 start.go:466] detecting cgroup driver to use...
	I0911 11:53:48.110652 2247040 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:53:48.126299 2247040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:53:48.140552 2247040 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:53:48.140729 2247040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:53:48.154406 2247040 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:53:48.167747 2247040 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0911 11:53:48.183117 2247040 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0911 11:53:48.183204 2247040 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:53:48.367207 2247040 docker.go:212] disabling docker service ...
	I0911 11:53:48.367299 2247040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:53:49.394805 2247040 ssh_runner.go:235] Completed: sudo systemctl stop -f docker.socket: (1.027470818s)
	I0911 11:53:49.394874 2247040 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:53:49.418278 2247040 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:53:49.690690 2247040 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:53:49.905306 2247040 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:53:49.919988 2247040 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:53:49.938733 2247040 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0911 11:53:49.938826 2247040 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:53:49.953532 2247040 out.go:177] 
	W0911 11:53:49.955528 2247040 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0911 11:53:49.955564 2247040 out.go:239] * 
	* 
	W0911 11:53:49.956522 2247040 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 11:53:49.958463 2247040 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:145: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p running-upgrade-569346 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
panic.go:522: *** TestRunningBinaryUpgrade FAILED at 2023-09-11 11:53:49.981993968 +0000 UTC m=+3435.294618858
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-569346 -n running-upgrade-569346
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p running-upgrade-569346 -n running-upgrade-569346: exit status 4 (282.133378ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 11:53:50.217296 2247464 status.go:415] kubeconfig endpoint: extract IP: "running-upgrade-569346" does not appear in /home/jenkins/minikube-integration/17223-2215273/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 4 (may be ok)
helpers_test.go:241: "running-upgrade-569346" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "running-upgrade-569346" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-569346
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-569346: (1.374530568s)
--- FAIL: TestRunningBinaryUpgrade (172.84s)
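
The failure above is the new binary trying to rewrite `pause_image` in `/etc/crio/crio.conf.d/02-crio.conf` inside a VM that was provisioned by the v1.6.2 binary, where that drop-in file does not exist. A diagnostic sketch follows; it simply re-runs the same substitution against whichever crio config file is actually present in the guest. The assumption that the v1.6-era ISO keeps its config at `/etc/crio/crio.conf` with no `crio.conf.d` drop-ins is inferred from the "No such file or directory" error, not confirmed by this log.

```bash
#!/usr/bin/env bash
# Diagnostic sketch only, not the test's own logic. Profile name, pause image tag
# and the drop-in path are taken from the log above; the legacy fallback path is
# an assumption about the v1.6.0 guest ISO.
set -euo pipefail

PROFILE=running-upgrade-569346        # profile from the log above
PAUSE_IMG=registry.k8s.io/pause:3.1   # pause image tag from the log above

# Find out which crio config layout the guest really has.
CONF=$(out/minikube-linux-amd64 -p "$PROFILE" ssh -- \
  'test -f /etc/crio/crio.conf.d/02-crio.conf && echo /etc/crio/crio.conf.d/02-crio.conf || echo /etc/crio/crio.conf')

# Apply the same substitution the failing step ran, to the file that exists.
out/minikube-linux-amd64 -p "$PROFILE" ssh -- \
  "sudo sed -i 's|^.*pause_image = .*\$|pause_image = \"${PAUSE_IMG}\"|' ${CONF}"
```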

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (316.1s)
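
This subtest exercises the stopped-cluster upgrade path: bring the cluster up with the legacy v1.6.2 binary, stop it, then start it again with the binary under test. The exact invocations appear in the log below; a minimal sketch for replaying them by hand, assuming the legacy binary sits at the temp path shown in the log, is:

```bash
#!/usr/bin/env bash
# Replay of the three steps version_upgrade_test.go drives below; flags are
# copied verbatim from the log. The legacy binary path is the temp file the
# test extracted; substitute any locally available v1.6.2 build.
set -euo pipefail

OLD=/tmp/minikube-v1.6.2.2279816089.exe   # legacy binary (path from the log)
NEW=out/minikube-linux-amd64              # binary under test
PROFILE=stopped-upgrade-715426

"$OLD" start -p "$PROFILE" --memory=2200 --vm-driver=kvm2 --container-runtime=crio
"$OLD" -p "$PROFILE" stop
"$NEW" start -p "$PROFILE" --memory=2200 --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio
```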

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:196: (dbg) Run:  /tmp/minikube-v1.6.2.2279816089.exe start -p stopped-upgrade-715426 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:196: (dbg) Done: /tmp/minikube-v1.6.2.2279816089.exe start -p stopped-upgrade-715426 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m33.570306688s)
version_upgrade_test.go:205: (dbg) Run:  /tmp/minikube-v1.6.2.2279816089.exe -p stopped-upgrade-715426 stop
version_upgrade_test.go:205: (dbg) Done: /tmp/minikube-v1.6.2.2279816089.exe -p stopped-upgrade-715426 stop: (1m33.643695772s)
version_upgrade_test.go:211: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-715426 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:211: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p stopped-upgrade-715426 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90 (1m8.872837861s)

                                                
                                                
-- stdout --
	* [stopped-upgrade-715426] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	* Using the kvm2 driver based on existing profile
	* Starting control plane node stopped-upgrade-715426 in cluster stopped-upgrade-715426
	* Restarting existing kvm2 VM for "stopped-upgrade-715426" ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:55:06.413694 2250323 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:55:06.413827 2250323 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:55:06.413839 2250323 out.go:309] Setting ErrFile to fd 2...
	I0911 11:55:06.413846 2250323 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:55:06.414058 2250323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:55:06.414669 2250323 out.go:303] Setting JSON to false
	I0911 11:55:06.415682 2250323 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236257,"bootTime":1694197049,"procs":241,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:55:06.415759 2250323 start.go:138] virtualization: kvm guest
	I0911 11:55:06.418321 2250323 out.go:177] * [stopped-upgrade-715426] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:55:06.419969 2250323 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:55:06.419980 2250323 notify.go:220] Checking for updates...
	I0911 11:55:06.421571 2250323 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:55:06.423262 2250323 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:55:06.425116 2250323 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:55:06.426715 2250323 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:55:06.429519 2250323 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:55:06.431768 2250323 config.go:182] Loaded profile config "stopped-upgrade-715426": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0911 11:55:06.431790 2250323 start_flags.go:686] config upgrade: Driver=kvm2
	I0911 11:55:06.431808 2250323 start_flags.go:698] config upgrade: KicBaseImage=gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b
	I0911 11:55:06.431913 2250323 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/stopped-upgrade-715426/config.json ...
	I0911 11:55:06.432515 2250323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:55:06.432592 2250323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:55:06.449469 2250323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44639
	I0911 11:55:06.449891 2250323 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:55:06.450540 2250323 main.go:141] libmachine: Using API Version  1
	I0911 11:55:06.450571 2250323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:55:06.450975 2250323 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:55:06.451189 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:55:06.453873 2250323 out.go:177] * Kubernetes 1.28.1 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.1
	I0911 11:55:06.455786 2250323 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:55:06.456167 2250323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:55:06.456221 2250323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:55:06.471574 2250323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37417
	I0911 11:55:06.472064 2250323 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:55:06.472634 2250323 main.go:141] libmachine: Using API Version  1
	I0911 11:55:06.472668 2250323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:55:06.473051 2250323 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:55:06.473270 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:55:06.515862 2250323 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 11:55:06.517475 2250323 start.go:298] selected driver: kvm2
	I0911 11:55:06.517495 2250323 start.go:902] validating driver "kvm2" against &{Name:stopped-upgrade-715426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 Clust
erName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.66 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthS
ock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0911 11:55:06.517605 2250323 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:55:06.518305 2250323 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.518425 2250323 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:55:06.534670 2250323 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:55:06.535012 2250323 cni.go:84] Creating CNI manager for ""
	I0911 11:55:06.535028 2250323 cni.go:129] EnableDefaultCNI is true, recommending bridge
	I0911 11:55:06.535039 2250323 start_flags.go:321] config:
	{Name:stopped-upgrade-715426 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.6.0.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver:kvm2 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.99.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:0 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser: SSHKey: SSHPort:0 KubernetesConfig:{KubernetesVersion:v1.17.0 ClusterName: Namespace: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[]
DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:true CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:minikube IP:192.168.50.66 Port:8443 KubernetesVersion:v1.17.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[] StartHostTimeout:0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString: Mount9PVersion: MountGID: MountIP: MountMSize:0 MountOptions:[] MountPort:0 MountType: MountUID: BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:0s}
	I0911 11:55:06.535226 2250323 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.537530 2250323 out.go:177] * Starting control plane node stopped-upgrade-715426 in cluster stopped-upgrade-715426
	I0911 11:55:06.539406 2250323 preload.go:132] Checking if preload exists for k8s version v1.17.0 and runtime crio
	W0911 11:55:06.567459 2250323 preload.go:115] https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.17.0/preloaded-images-k8s-v18-v1.17.0-cri-o-overlay-amd64.tar.lz4 status code: 404
	I0911 11:55:06.567687 2250323 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/stopped-upgrade-715426/config.json ...
	I0911 11:55:06.567787 2250323 cache.go:107] acquiring lock: {Name:mk84e75269bc58adba7d0b682b95ab327a8a8363 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.567802 2250323 cache.go:107] acquiring lock: {Name:mka28b32897ff39a89627329cc92986685d2f4e7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.567903 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0911 11:55:06.567912 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 exists
	I0911 11:55:06.567920 2250323 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 145.619µs
	I0911 11:55:06.567932 2250323 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0911 11:55:06.567929 2250323 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0" took 135.375µs
	I0911 11:55:06.567940 2250323 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.17.0 succeeded
	I0911 11:55:06.567948 2250323 cache.go:107] acquiring lock: {Name:mk0c748a64b7c94f3a24d0cfd25a318aa01b41b6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.568034 2250323 start.go:365] acquiring machines lock for stopped-upgrade-715426: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:55:06.568041 2250323 cache.go:107] acquiring lock: {Name:mkc22284b9f892c2fcb14c256537036255115bd4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.568062 2250323 cache.go:107] acquiring lock: {Name:mk9b9aecf9eae48ef42d9bbe9b560e5eb4ecc0ee Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.568139 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 exists
	I0911 11:55:06.568073 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 exists
	I0911 11:55:06.568185 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 exists
	I0911 11:55:06.568192 2250323 cache.go:96] cache image "registry.k8s.io/coredns:1.6.5" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5" took 234.241µs
	I0911 11:55:06.568214 2250323 cache.go:80] save to tar file registry.k8s.io/coredns:1.6.5 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.5 succeeded
	I0911 11:55:06.567899 2250323 cache.go:107] acquiring lock: {Name:mka2989037021eece020a023217f187fc2a2deac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.568208 2250323 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0" took 296.835µs
	I0911 11:55:06.568235 2250323 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.17.0 succeeded
	I0911 11:55:06.567919 2250323 cache.go:107] acquiring lock: {Name:mk3f982b58c40c85cf5eadf9fc9a698dc136a916 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.567954 2250323 cache.go:107] acquiring lock: {Name:mk6ddb59fe1ed56d56a8858d6009e9b25831f2ce Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:06.568155 2250323 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0" took 241.679µs
	I0911 11:55:06.568273 2250323 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.17.0 succeeded
	I0911 11:55:06.568315 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 exists
	I0911 11:55:06.568332 2250323 cache.go:96] cache image "registry.k8s.io/etcd:3.4.3-0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0" took 477.809µs
	I0911 11:55:06.568341 2250323 cache.go:80] save to tar file registry.k8s.io/etcd:3.4.3-0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.3-0 succeeded
	I0911 11:55:06.568347 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 exists
	I0911 11:55:06.568354 2250323 cache.go:115] /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 exists
	I0911 11:55:06.568379 2250323 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.17.0" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0" took 425.808µs
	I0911 11:55:06.568394 2250323 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.17.0 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.17.0 succeeded
	I0911 11:55:06.568355 2250323 cache.go:96] cache image "registry.k8s.io/pause:3.1" -> "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1" took 500.454µs
	I0911 11:55:06.568403 2250323 cache.go:80] save to tar file registry.k8s.io/pause:3.1 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 succeeded
	I0911 11:55:06.568411 2250323 cache.go:87] Successfully saved all images to host disk.
	I0911 11:55:31.488080 2250323 start.go:369] acquired machines lock for "stopped-upgrade-715426" in 24.920005747s
	I0911 11:55:31.488143 2250323 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:55:31.488154 2250323 fix.go:54] fixHost starting: minikube
	I0911 11:55:31.488577 2250323 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:55:31.488652 2250323 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:55:31.506576 2250323 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37903
	I0911 11:55:31.507023 2250323 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:55:31.507567 2250323 main.go:141] libmachine: Using API Version  1
	I0911 11:55:31.507592 2250323 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:55:31.507949 2250323 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:55:31.508187 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:55:31.508380 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetState
	I0911 11:55:31.510261 2250323 fix.go:102] recreateIfNeeded on stopped-upgrade-715426: state=Stopped err=<nil>
	I0911 11:55:31.510298 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	W0911 11:55:31.510523 2250323 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:55:31.512531 2250323 out.go:177] * Restarting existing kvm2 VM for "stopped-upgrade-715426" ...
	I0911 11:55:31.514186 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .Start
	I0911 11:55:31.514451 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Ensuring networks are active...
	I0911 11:55:31.515362 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Ensuring network default is active
	I0911 11:55:31.515727 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Ensuring network minikube-net is active
	I0911 11:55:31.516158 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Getting domain xml...
	I0911 11:55:31.517182 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Creating domain...
	I0911 11:55:32.824588 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Waiting to get IP...
	I0911 11:55:32.825804 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:32.826318 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:32.826423 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:32.826311 2250870 retry.go:31] will retry after 229.805205ms: waiting for machine to come up
	I0911 11:55:33.057976 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:33.058588 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:33.058626 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:33.058540 2250870 retry.go:31] will retry after 246.175963ms: waiting for machine to come up
	I0911 11:55:33.306371 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:33.307021 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:33.307068 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:33.306941 2250870 retry.go:31] will retry after 377.21923ms: waiting for machine to come up
	I0911 11:55:33.685585 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:33.686133 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:33.686167 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:33.686082 2250870 retry.go:31] will retry after 381.161501ms: waiting for machine to come up
	I0911 11:55:34.068725 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:34.069231 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:34.069266 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:34.069172 2250870 retry.go:31] will retry after 722.992084ms: waiting for machine to come up
	I0911 11:55:34.794514 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:34.794990 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:34.795062 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:34.794957 2250870 retry.go:31] will retry after 638.848955ms: waiting for machine to come up
	I0911 11:55:35.436110 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:35.436745 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:35.436776 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:35.436690 2250870 retry.go:31] will retry after 1.115146293s: waiting for machine to come up
	I0911 11:55:36.553784 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:36.554306 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:36.554335 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:36.554274 2250870 retry.go:31] will retry after 1.136620458s: waiting for machine to come up
	I0911 11:55:37.692166 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:37.692686 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:37.692716 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:37.692638 2250870 retry.go:31] will retry after 1.311085397s: waiting for machine to come up
	I0911 11:55:39.006163 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:39.006589 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:39.006631 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:39.006545 2250870 retry.go:31] will retry after 2.155691642s: waiting for machine to come up
	I0911 11:55:41.165096 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:41.165575 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:41.165607 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:41.165498 2250870 retry.go:31] will retry after 1.974129804s: waiting for machine to come up
	I0911 11:55:43.141976 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:43.142426 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:43.142456 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:43.142384 2250870 retry.go:31] will retry after 3.158208785s: waiting for machine to come up
	I0911 11:55:46.302087 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:46.302617 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:46.302652 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:46.302535 2250870 retry.go:31] will retry after 3.330397198s: waiting for machine to come up
	I0911 11:55:49.637549 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:49.638033 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:49.638074 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:49.637975 2250870 retry.go:31] will retry after 5.239913831s: waiting for machine to come up
	I0911 11:55:54.882605 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:55:54.883070 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:55:54.883115 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:55:54.883019 2250870 retry.go:31] will retry after 6.564511204s: waiting for machine to come up
	I0911 11:56:01.450075 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:01.450534 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | unable to find current IP address of domain stopped-upgrade-715426 in network minikube-net
	I0911 11:56:01.450561 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | I0911 11:56:01.450490 2250870 retry.go:31] will retry after 6.2579409s: waiting for machine to come up
	I0911 11:56:07.709895 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.710428 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has current primary IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.710453 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Found IP for machine: 192.168.50.66
	I0911 11:56:07.710463 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Reserving static IP address...
	I0911 11:56:07.710957 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "stopped-upgrade-715426", mac: "52:54:00:64:ed:b5", ip: "192.168.50.66"} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:07.711013 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | skip adding static IP to network minikube-net - found existing host DHCP lease matching {name: "stopped-upgrade-715426", mac: "52:54:00:64:ed:b5", ip: "192.168.50.66"}
	I0911 11:56:07.711037 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Reserved static IP address: 192.168.50.66
	I0911 11:56:07.711056 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Waiting for SSH to be available...
	I0911 11:56:07.711079 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | Getting to WaitForSSH function...
	I0911 11:56:07.713298 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.713618 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:07.713650 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.713849 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | Using SSH client type: external
	I0911 11:56:07.713882 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/stopped-upgrade-715426/id_rsa (-rw-------)
	I0911 11:56:07.713960 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.66 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/stopped-upgrade-715426/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 11:56:07.714003 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | About to run SSH command:
	I0911 11:56:07.714021 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | exit 0
	I0911 11:56:07.840976 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | SSH cmd err, output: <nil>: 
	I0911 11:56:07.841380 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetConfigRaw
	I0911 11:56:07.842173 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetIP
	I0911 11:56:07.845629 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.846229 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:07.846273 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.846537 2250323 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/stopped-upgrade-715426/config.json ...
	I0911 11:56:07.846802 2250323 machine.go:88] provisioning docker machine ...
	I0911 11:56:07.846831 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:56:07.847114 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetMachineName
	I0911 11:56:07.847322 2250323 buildroot.go:166] provisioning hostname "stopped-upgrade-715426"
	I0911 11:56:07.847344 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetMachineName
	I0911 11:56:07.847497 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:07.850186 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.850572 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:07.850620 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.850808 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:07.851034 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:07.851183 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:07.851347 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:07.851573 2250323 main.go:141] libmachine: Using SSH client type: native
	I0911 11:56:07.852098 2250323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I0911 11:56:07.852127 2250323 main.go:141] libmachine: About to run SSH command:
	sudo hostname stopped-upgrade-715426 && echo "stopped-upgrade-715426" | sudo tee /etc/hostname
	I0911 11:56:07.963937 2250323 main.go:141] libmachine: SSH cmd err, output: <nil>: stopped-upgrade-715426
	
	I0911 11:56:07.963974 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:07.967150 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.967533 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:07.967576 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:07.967873 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:07.968110 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:07.968288 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:07.968468 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:07.968668 2250323 main.go:141] libmachine: Using SSH client type: native
	I0911 11:56:07.969107 2250323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I0911 11:56:07.969137 2250323 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sstopped-upgrade-715426' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 stopped-upgrade-715426/g' /etc/hosts;
				else 
					echo '127.0.1.1 stopped-upgrade-715426' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:56:08.077358 2250323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:56:08.077388 2250323 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:56:08.077421 2250323 buildroot.go:174] setting up certificates
	I0911 11:56:08.077431 2250323 provision.go:83] configureAuth start
	I0911 11:56:08.077447 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetMachineName
	I0911 11:56:08.077826 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetIP
	I0911 11:56:08.080740 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.081135 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:08.081168 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.081342 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:08.083551 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.083886 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:08.083909 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.084054 2250323 provision.go:138] copyHostCerts
	I0911 11:56:08.084119 2250323 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:56:08.084130 2250323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:56:08.084195 2250323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:56:08.084315 2250323 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:56:08.084326 2250323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:56:08.084353 2250323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:56:08.084405 2250323 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:56:08.084412 2250323 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:56:08.084432 2250323 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:56:08.084477 2250323 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.stopped-upgrade-715426 san=[192.168.50.66 192.168.50.66 localhost 127.0.0.1 minikube stopped-upgrade-715426]
	I0911 11:56:08.358251 2250323 provision.go:172] copyRemoteCerts
	I0911 11:56:08.358317 2250323 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:56:08.358345 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:08.361187 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.361519 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:08.361557 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.361778 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:08.361994 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:08.362180 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:08.362331 2250323 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/stopped-upgrade-715426/id_rsa Username:docker}
	I0911 11:56:08.443783 2250323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:56:08.458776 2250323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 11:56:08.473463 2250323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:56:08.487810 2250323 provision.go:86] duration metric: configureAuth took 410.33915ms
	I0911 11:56:08.487843 2250323 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:56:08.488068 2250323 config.go:182] Loaded profile config "stopped-upgrade-715426": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0911 11:56:08.488174 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:08.490884 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.491251 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:08.491298 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:08.491430 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:08.491665 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:08.491859 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:08.491993 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:08.492179 2250323 main.go:141] libmachine: Using SSH client type: native
	I0911 11:56:08.492653 2250323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I0911 11:56:08.492672 2250323 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:56:14.284499 2250323 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:56:14.284535 2250323 machine.go:91] provisioned docker machine in 6.437717119s
	I0911 11:56:14.284548 2250323 start.go:300] post-start starting for "stopped-upgrade-715426" (driver="kvm2")
	I0911 11:56:14.284598 2250323 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:56:14.284628 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:56:14.285017 2250323 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:56:14.285055 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:14.287799 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.288241 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:14.288277 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.288422 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:14.288639 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:14.288859 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:14.289005 2250323 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/stopped-upgrade-715426/id_rsa Username:docker}
	I0911 11:56:14.370528 2250323 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:56:14.375327 2250323 info.go:137] Remote host: Buildroot 2019.02.7
	I0911 11:56:14.375361 2250323 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:56:14.375461 2250323 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:56:14.375564 2250323 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:56:14.375688 2250323 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:56:14.382041 2250323 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:56:14.398825 2250323 start.go:303] post-start completed in 114.260451ms
	I0911 11:56:14.398857 2250323 fix.go:56] fixHost completed within 42.910702482s
	I0911 11:56:14.398889 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:14.402081 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.402459 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:14.402509 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.402746 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:14.402984 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:14.403149 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:14.403296 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:14.403448 2250323 main.go:141] libmachine: Using SSH client type: native
	I0911 11:56:14.403949 2250323 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.66 22 <nil> <nil>}
	I0911 11:56:14.403966 2250323 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 11:56:14.509370 2250323 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694433374.462657041
	
	I0911 11:56:14.509395 2250323 fix.go:206] guest clock: 1694433374.462657041
	I0911 11:56:14.509405 2250323 fix.go:219] Guest: 2023-09-11 11:56:14.462657041 +0000 UTC Remote: 2023-09-11 11:56:14.39886149 +0000 UTC m=+68.029446849 (delta=63.795551ms)
	I0911 11:56:14.509433 2250323 fix.go:190] guest clock delta is within tolerance: 63.795551ms
	I0911 11:56:14.509440 2250323 start.go:83] releasing machines lock for "stopped-upgrade-715426", held for 43.021322277s
	I0911 11:56:14.509472 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:56:14.509799 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetIP
	I0911 11:56:14.512778 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.513278 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:14.513319 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.513526 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:56:14.514150 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:56:14.514378 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .DriverName
	I0911 11:56:14.514475 2250323 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:56:14.514542 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:14.514606 2250323 ssh_runner.go:195] Run: cat /version.json
	I0911 11:56:14.514634 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHHostname
	I0911 11:56:14.517583 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.517906 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.518090 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:14.518144 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.518400 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:64:ed:b5", ip: ""} in network minikube-net: {Iface:virbr2 ExpiryTime:2023-09-11 12:51:38 +0000 UTC Type:0 Mac:52:54:00:64:ed:b5 Iaid: IPaddr:192.168.50.66 Prefix:24 Hostname:stopped-upgrade-715426 Clientid:01:52:54:00:64:ed:b5}
	I0911 11:56:14.518459 2250323 main.go:141] libmachine: (stopped-upgrade-715426) DBG | domain stopped-upgrade-715426 has defined IP address 192.168.50.66 and MAC address 52:54:00:64:ed:b5 in network minikube-net
	I0911 11:56:14.518421 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:14.518577 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHPort
	I0911 11:56:14.518789 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:14.518805 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHKeyPath
	I0911 11:56:14.518983 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:14.518988 2250323 main.go:141] libmachine: (stopped-upgrade-715426) Calling .GetSSHUsername
	I0911 11:56:14.519181 2250323 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/stopped-upgrade-715426/id_rsa Username:docker}
	I0911 11:56:14.519186 2250323 sshutil.go:53] new ssh client: &{IP:192.168.50.66 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/stopped-upgrade-715426/id_rsa Username:docker}
	W0911 11:56:14.618253 2250323 start.go:419] Unable to open version.json: cat /version.json: Process exited with status 1
	stdout:
	
	stderr:
	cat: /version.json: No such file or directory
	I0911 11:56:14.618349 2250323 ssh_runner.go:195] Run: systemctl --version
	I0911 11:56:14.623321 2250323 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:56:14.823541 2250323 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 11:56:14.830782 2250323 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:56:14.830907 2250323 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:56:14.839396 2250323 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 11:56:14.839432 2250323 start.go:466] detecting cgroup driver to use...
	I0911 11:56:14.839516 2250323 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:56:14.852362 2250323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:56:14.863975 2250323 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:56:14.864051 2250323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:56:14.873345 2250323 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:56:14.883214 2250323 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	W0911 11:56:14.891650 2250323 docker.go:206] Failed to disable socket "cri-docker.socket" (might be ok): sudo systemctl disable cri-docker.socket: Process exited with status 1
	stdout:
	
	stderr:
	Failed to disable unit: Unit file cri-docker.socket does not exist.
	I0911 11:56:14.891737 2250323 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:56:14.978994 2250323 docker.go:212] disabling docker service ...
	I0911 11:56:14.979086 2250323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:56:14.991303 2250323 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:56:14.999604 2250323 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:56:15.093595 2250323 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:56:15.189922 2250323 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:56:15.201264 2250323 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:56:15.216029 2250323 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0911 11:56:15.216102 2250323 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:56:15.226047 2250323 out.go:177] 
	W0911 11:56:15.227734 2250323 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: update pause_image: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf": Process exited with status 1
	stdout:
	
	stderr:
	sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory
	
	W0911 11:56:15.227763 2250323 out.go:239] * 
	* 
	W0911 11:56:15.228868 2250323 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 11:56:15.230702 2250323 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:213: upgrade from v1.6.2 to HEAD failed: out/minikube-linux-amd64 start -p stopped-upgrade-715426 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 90
--- FAIL: TestStoppedBinaryUpgrade/Upgrade (316.10s)
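For reference, the step that aborted this upgrade is the pause-image rewrite recorded in the stderr above: the sed against /etc/crio/crio.conf.d/02-crio.conf exits with status 1 because that drop-in file is absent on the VM provisioned by the old v1.6.2 binary. Below is a minimal, hedged sketch of that step only (assuming shell access to the VM, e.g. via minikube ssh -p stopped-upgrade-715426; the guard is illustrative and not minikube's actual code path):

	# Sketch only: re-runs the pause_image rewrite from the log, but checks for the drop-in first.
	CONF=/etc/crio/crio.conf.d/02-crio.conf
	if [ -f "$CONF" ]; then
	  sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' "$CONF"
	else
	  # Matches the failure above: "sed: /etc/crio/crio.conf.d/02-crio.conf: No such file or directory"
	  echo "drop-in $CONF not present on this image" >&2
	fi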

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (78.09s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-474712 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-474712 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m14.129259338s)
pause_test.go:100: expected the second start log output to include "The running cluster does not require reconfiguration" but got: 
-- stdout --
	* [pause-474712] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	* Starting control plane node pause-474712 in cluster pause-474712
	* Updating the running kvm2 "pause-474712" VM ...
	* Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	* Configuring bridge CNI (Container Networking Interface) ...
	* Enabled addons: 
	* Verifying Kubernetes components...
	* Done! kubectl is now configured to use "pause-474712" cluster and "default" namespace by default

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:55:04.666882 2250272 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:55:04.667030 2250272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:55:04.667041 2250272 out.go:309] Setting ErrFile to fd 2...
	I0911 11:55:04.667048 2250272 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:55:04.667287 2250272 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:55:04.667979 2250272 out.go:303] Setting JSON to false
	I0911 11:55:04.669150 2250272 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236256,"bootTime":1694197049,"procs":247,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:55:04.669236 2250272 start.go:138] virtualization: kvm guest
	I0911 11:55:04.672787 2250272 out.go:177] * [pause-474712] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:55:04.675838 2250272 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:55:04.675873 2250272 notify.go:220] Checking for updates...
	I0911 11:55:04.677817 2250272 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:55:04.679604 2250272 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:55:04.681408 2250272 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:55:04.683306 2250272 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:55:04.685101 2250272 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:55:04.687386 2250272 config.go:182] Loaded profile config "pause-474712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:55:04.688804 2250272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:55:04.688895 2250272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:55:04.710972 2250272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33421
	I0911 11:55:04.711461 2250272 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:55:04.712165 2250272 main.go:141] libmachine: Using API Version  1
	I0911 11:55:04.712196 2250272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:55:04.712671 2250272 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:55:04.712908 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:04.713187 2250272 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:55:04.713521 2250272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:55:04.713563 2250272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:55:04.731369 2250272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36535
	I0911 11:55:04.731825 2250272 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:55:04.732505 2250272 main.go:141] libmachine: Using API Version  1
	I0911 11:55:04.732539 2250272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:55:04.732965 2250272 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:55:04.733196 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:04.776359 2250272 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 11:55:04.777989 2250272 start.go:298] selected driver: kvm2
	I0911 11:55:04.778009 2250272 start.go:902] validating driver "kvm2" against &{Name:pause-474712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kub
ernetesVersion:v1.28.1 ClusterName:pause-474712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false ol
m:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:55:04.778220 2250272 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:55:04.778688 2250272 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:04.778788 2250272 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:55:04.800091 2250272 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:55:04.801130 2250272 cni.go:84] Creating CNI manager for ""
	I0911 11:55:04.801148 2250272 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:55:04.801160 2250272 start_flags.go:321] config:
	{Name:pause-474712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:pause-474712 Namespace:default APIServerNam
e:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-alia
ses:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:55:04.801435 2250272 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:55:04.804774 2250272 out.go:177] * Starting control plane node pause-474712 in cluster pause-474712
	I0911 11:55:04.806697 2250272 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:55:04.806794 2250272 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 11:55:04.806818 2250272 cache.go:57] Caching tarball of preloaded images
	I0911 11:55:04.806948 2250272 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 11:55:04.806964 2250272 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 11:55:04.807166 2250272 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/config.json ...
	I0911 11:55:04.807435 2250272 start.go:365] acquiring machines lock for pause-474712: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 11:55:23.998360 2250272 start.go:369] acquired machines lock for "pause-474712" in 19.190892974s
	I0911 11:55:23.998422 2250272 start.go:96] Skipping create...Using existing machine configuration
	I0911 11:55:23.998443 2250272 fix.go:54] fixHost starting: 
	I0911 11:55:23.998934 2250272 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:55:23.998999 2250272 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:55:24.018169 2250272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44945
	I0911 11:55:24.018695 2250272 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:55:24.019403 2250272 main.go:141] libmachine: Using API Version  1
	I0911 11:55:24.019431 2250272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:55:24.019809 2250272 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:55:24.020029 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:24.020198 2250272 main.go:141] libmachine: (pause-474712) Calling .GetState
	I0911 11:55:24.022123 2250272 fix.go:102] recreateIfNeeded on pause-474712: state=Running err=<nil>
	W0911 11:55:24.022160 2250272 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 11:55:24.024212 2250272 out.go:177] * Updating the running kvm2 "pause-474712" VM ...
	I0911 11:55:24.026218 2250272 machine.go:88] provisioning docker machine ...
	I0911 11:55:24.026255 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:24.026666 2250272 main.go:141] libmachine: (pause-474712) Calling .GetMachineName
	I0911 11:55:24.026850 2250272 buildroot.go:166] provisioning hostname "pause-474712"
	I0911 11:55:24.026868 2250272 main.go:141] libmachine: (pause-474712) Calling .GetMachineName
	I0911 11:55:24.027026 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:24.030355 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.030912 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:24.030941 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.031137 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:24.031358 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:24.031548 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:24.031760 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:24.031996 2250272 main.go:141] libmachine: Using SSH client type: native
	I0911 11:55:24.032644 2250272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.254 22 <nil> <nil>}
	I0911 11:55:24.032674 2250272 main.go:141] libmachine: About to run SSH command:
	sudo hostname pause-474712 && echo "pause-474712" | sudo tee /etc/hostname
	I0911 11:55:24.187334 2250272 main.go:141] libmachine: SSH cmd err, output: <nil>: pause-474712
	
	I0911 11:55:24.187368 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:24.191195 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.191678 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:24.191713 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.191991 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:24.192245 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:24.192434 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:24.192591 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:24.192840 2250272 main.go:141] libmachine: Using SSH client type: native
	I0911 11:55:24.193323 2250272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.254 22 <nil> <nil>}
	I0911 11:55:24.193347 2250272 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\spause-474712' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 pause-474712/g' /etc/hosts;
				else 
					echo '127.0.1.1 pause-474712' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 11:55:24.316883 2250272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 11:55:24.316923 2250272 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 11:55:24.316959 2250272 buildroot.go:174] setting up certificates
	I0911 11:55:24.316970 2250272 provision.go:83] configureAuth start
	I0911 11:55:24.316990 2250272 main.go:141] libmachine: (pause-474712) Calling .GetMachineName
	I0911 11:55:24.317366 2250272 main.go:141] libmachine: (pause-474712) Calling .GetIP
	I0911 11:55:24.320988 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.321487 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:24.321529 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.321863 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:24.324882 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.325290 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:24.325370 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.325519 2250272 provision.go:138] copyHostCerts
	I0911 11:55:24.325611 2250272 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 11:55:24.325628 2250272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 11:55:24.325694 2250272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 11:55:24.325796 2250272 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 11:55:24.325805 2250272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 11:55:24.325825 2250272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 11:55:24.325899 2250272 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 11:55:24.325910 2250272 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 11:55:24.325931 2250272 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 11:55:24.325975 2250272 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.pause-474712 san=[192.168.72.254 192.168.72.254 localhost 127.0.0.1 minikube pause-474712]
	I0911 11:55:24.476466 2250272 provision.go:172] copyRemoteCerts
	I0911 11:55:24.476524 2250272 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 11:55:24.476558 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:24.479877 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.480383 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:24.480437 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.480642 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:24.480931 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:24.481146 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:24.481325 2250272 sshutil.go:53] new ssh client: &{IP:192.168.72.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/pause-474712/id_rsa Username:docker}
	I0911 11:55:24.570978 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 11:55:24.601718 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0911 11:55:24.638169 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 11:55:24.677622 2250272 provision.go:86] duration metric: configureAuth took 360.63047ms
	I0911 11:55:24.677658 2250272 buildroot.go:189] setting minikube options for container-runtime
	I0911 11:55:24.677929 2250272 config.go:182] Loaded profile config "pause-474712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:55:24.678034 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:24.681554 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.682018 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:24.682057 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:24.682406 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:24.682674 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:24.682907 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:24.683093 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:24.683306 2250272 main.go:141] libmachine: Using SSH client type: native
	I0911 11:55:24.683932 2250272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.254 22 <nil> <nil>}
	I0911 11:55:24.683959 2250272 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 11:55:30.826400 2250272 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 11:55:30.826436 2250272 machine.go:91] provisioned docker machine in 6.800195401s
	I0911 11:55:30.826448 2250272 start.go:300] post-start starting for "pause-474712" (driver="kvm2")
	I0911 11:55:30.826469 2250272 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 11:55:30.826502 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:30.826915 2250272 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 11:55:30.826952 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:30.829964 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:30.830529 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:30.830565 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:30.830752 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:30.830975 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:30.831166 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:30.831374 2250272 sshutil.go:53] new ssh client: &{IP:192.168.72.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/pause-474712/id_rsa Username:docker}
	I0911 11:55:31.120050 2250272 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 11:55:31.157418 2250272 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 11:55:31.157451 2250272 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 11:55:31.157550 2250272 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 11:55:31.157649 2250272 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 11:55:31.157765 2250272 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 11:55:31.202326 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 11:55:31.303063 2250272 start.go:303] post-start completed in 476.572384ms
	I0911 11:55:31.303100 2250272 fix.go:56] fixHost completed within 7.304667168s
	I0911 11:55:31.303133 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:31.306196 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.306623 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:31.306649 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.306858 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:31.307101 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:31.307303 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:31.307452 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:31.307649 2250272 main.go:141] libmachine: Using SSH client type: native
	I0911 11:55:31.308110 2250272 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.254 22 <nil> <nil>}
	I0911 11:55:31.308124 2250272 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 11:55:31.487907 2250272 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694433331.484740023
	
	I0911 11:55:31.487936 2250272 fix.go:206] guest clock: 1694433331.484740023
	I0911 11:55:31.487947 2250272 fix.go:219] Guest: 2023-09-11 11:55:31.484740023 +0000 UTC Remote: 2023-09-11 11:55:31.303104404 +0000 UTC m=+26.676129106 (delta=181.635619ms)
	I0911 11:55:31.487975 2250272 fix.go:190] guest clock delta is within tolerance: 181.635619ms
	I0911 11:55:31.487982 2250272 start.go:83] releasing machines lock for "pause-474712", held for 7.489592178s
	I0911 11:55:31.488018 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:31.488346 2250272 main.go:141] libmachine: (pause-474712) Calling .GetIP
	I0911 11:55:31.491259 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.491696 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:31.491729 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.491924 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:31.492596 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:31.492837 2250272 main.go:141] libmachine: (pause-474712) Calling .DriverName
	I0911 11:55:31.492952 2250272 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 11:55:31.493011 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:31.493115 2250272 ssh_runner.go:195] Run: cat /version.json
	I0911 11:55:31.493143 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHHostname
	I0911 11:55:31.496314 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.496461 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.496734 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:31.496773 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.496798 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:31.496872 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:31.496952 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:31.497174 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:31.497194 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHPort
	I0911 11:55:31.497400 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHKeyPath
	I0911 11:55:31.497400 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:31.497537 2250272 sshutil.go:53] new ssh client: &{IP:192.168.72.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/pause-474712/id_rsa Username:docker}
	I0911 11:55:31.497613 2250272 main.go:141] libmachine: (pause-474712) Calling .GetSSHUsername
	I0911 11:55:31.497759 2250272 sshutil.go:53] new ssh client: &{IP:192.168.72.254 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/pause-474712/id_rsa Username:docker}
	I0911 11:55:31.625219 2250272 ssh_runner.go:195] Run: systemctl --version
	I0911 11:55:31.649407 2250272 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 11:55:31.854749 2250272 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 11:55:31.867005 2250272 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 11:55:31.867119 2250272 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 11:55:31.887097 2250272 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0911 11:55:31.887133 2250272 start.go:466] detecting cgroup driver to use...
	I0911 11:55:31.887213 2250272 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 11:55:31.911897 2250272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 11:55:31.937577 2250272 docker.go:196] disabling cri-docker service (if available) ...
	I0911 11:55:31.937659 2250272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 11:55:31.963946 2250272 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 11:55:31.997779 2250272 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 11:55:32.352106 2250272 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 11:55:32.656775 2250272 docker.go:212] disabling docker service ...
	I0911 11:55:32.656898 2250272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 11:55:32.689394 2250272 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 11:55:32.720876 2250272 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 11:55:33.009463 2250272 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 11:55:33.360194 2250272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 11:55:33.392355 2250272 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 11:55:33.446929 2250272 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 11:55:33.447008 2250272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:55:33.476293 2250272 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 11:55:33.476388 2250272 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:55:33.501733 2250272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:55:33.544769 2250272 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 11:55:33.567419 2250272 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 11:55:33.597216 2250272 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 11:55:33.615879 2250272 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 11:55:33.635427 2250272 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 11:55:33.882778 2250272 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 11:55:35.527828 2250272 ssh_runner.go:235] Completed: sudo systemctl restart crio: (1.64500352s)
	I0911 11:55:35.527875 2250272 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 11:55:35.527957 2250272 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 11:55:35.536351 2250272 start.go:534] Will wait 60s for crictl version
	I0911 11:55:35.536462 2250272 ssh_runner.go:195] Run: which crictl
	I0911 11:55:35.545971 2250272 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 11:55:35.591460 2250272 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 11:55:35.591568 2250272 ssh_runner.go:195] Run: crio --version
	I0911 11:55:35.644129 2250272 ssh_runner.go:195] Run: crio --version
	I0911 11:55:35.696890 2250272 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 11:55:35.698543 2250272 main.go:141] libmachine: (pause-474712) Calling .GetIP
	I0911 11:55:35.701521 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:35.701870 2250272 main.go:141] libmachine: (pause-474712) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4c:9b:dd", ip: ""} in network mk-pause-474712: {Iface:virbr3 ExpiryTime:2023-09-11 12:53:38 +0000 UTC Type:0 Mac:52:54:00:4c:9b:dd Iaid: IPaddr:192.168.72.254 Prefix:24 Hostname:pause-474712 Clientid:01:52:54:00:4c:9b:dd}
	I0911 11:55:35.701906 2250272 main.go:141] libmachine: (pause-474712) DBG | domain pause-474712 has defined IP address 192.168.72.254 and MAC address 52:54:00:4c:9b:dd in network mk-pause-474712
	I0911 11:55:35.702089 2250272 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0911 11:55:35.706836 2250272 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 11:55:35.706899 2250272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:55:35.749700 2250272 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:55:35.749727 2250272 crio.go:415] Images already preloaded, skipping extraction
	I0911 11:55:35.749787 2250272 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 11:55:35.782948 2250272 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 11:55:35.782970 2250272 cache_images.go:84] Images are preloaded, skipping loading
	I0911 11:55:35.783055 2250272 ssh_runner.go:195] Run: crio config
	I0911 11:55:35.854232 2250272 cni.go:84] Creating CNI manager for ""
	I0911 11:55:35.854269 2250272 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:55:35.854292 2250272 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 11:55:35.854321 2250272 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.254 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:pause-474712 NodeName:pause-474712 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.254"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.254 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kub
ernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 11:55:35.854521 2250272 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.254
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "pause-474712"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.254
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.254"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 11:55:35.854627 2250272 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=pause-474712 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.254
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:pause-474712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 11:55:35.854707 2250272 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 11:55:35.865049 2250272 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 11:55:35.865172 2250272 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 11:55:35.874770 2250272 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (372 bytes)
	I0911 11:55:35.893347 2250272 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 11:55:35.910914 2250272 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2099 bytes)
	I0911 11:55:35.929382 2250272 ssh_runner.go:195] Run: grep 192.168.72.254	control-plane.minikube.internal$ /etc/hosts
	I0911 11:55:35.933477 2250272 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712 for IP: 192.168.72.254
	I0911 11:55:35.933541 2250272 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 11:55:35.933732 2250272 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 11:55:35.933798 2250272 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 11:55:35.933892 2250272 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/client.key
	I0911 11:55:35.933970 2250272 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/apiserver.key.d7f2e1c2
	I0911 11:55:35.934015 2250272 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/proxy-client.key
	I0911 11:55:35.934158 2250272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 11:55:35.934206 2250272 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 11:55:35.934222 2250272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 11:55:35.934270 2250272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 11:55:35.934304 2250272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 11:55:35.934336 2250272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 11:55:35.934392 2250272 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
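	(certs.go reports that the existing CA keys and signed certificates for this profile were found and reused rather than regenerated. A quick, illustrative way to inspect what was reused, using the apiserver.crt path shown in the scp lines below:
	  openssl x509 -noout -subject -enddate \
	    -in /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/apiserver.crt
	)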
	I0911 11:55:35.935987 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 11:55:36.264790 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 11:55:36.391908 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 11:55:36.456461 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/pause-474712/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 11:55:36.509719 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 11:55:36.540470 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 11:55:36.581219 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 11:55:36.616959 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 11:55:36.652485 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 11:55:36.682560 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 11:55:36.718439 2250272 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 11:55:36.752018 2250272 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 11:55:36.774365 2250272 ssh_runner.go:195] Run: openssl version
	I0911 11:55:36.784870 2250272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 11:55:36.799743 2250272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 11:55:36.807036 2250272 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 11:55:36.807114 2250272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 11:55:36.816012 2250272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 11:55:36.833546 2250272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 11:55:36.852994 2250272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:55:36.863792 2250272 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:55:36.863876 2250272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 11:55:36.876455 2250272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 11:55:36.888738 2250272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 11:55:36.903808 2250272 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 11:55:36.910635 2250272 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 11:55:36.910715 2250272 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 11:55:36.923430 2250272 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
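	(The three openssl/ln pairs above install each PEM into the node's trust store using OpenSSL's subject-hash naming: `openssl x509 -hash -noout` prints the subject hash (3ec20f2e, b5213941 and 51391683 in this run), and the certificate is then symlinked as /etc/ssl/certs/<hash>.0 so OpenSSL's hash-based lookup can find it. A condensed sketch of the same pattern for one of the files:
	  HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	  sudo ln -fs /usr/share/ca-certificates/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
	)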
	I0911 11:55:36.946082 2250272 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 11:55:36.957050 2250272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 11:55:36.969025 2250272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 11:55:36.980369 2250272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 11:55:36.993894 2250272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 11:55:37.022356 2250272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 11:55:37.037560 2250272 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 11:55:37.048339 2250272 kubeadm.go:404] StartCluster: {Name:pause-474712 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.
28.1 ClusterName:pause-474712 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.254 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-securi
ty-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false volumesnapshots:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:55:37.048517 2250272 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 11:55:37.048586 2250272 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 11:55:37.104763 2250272 cri.go:89] found id: "40f009f62b9de595d9fb8dac01a37a316feec6c142adacd0d0c97f579ab2ee8b"
	I0911 11:55:37.104790 2250272 cri.go:89] found id: "00230d1c8c0daf7a9f430d981fbb1080eca9bd04aefe26bfdc1ae8a7ce3db7f3"
	I0911 11:55:37.104800 2250272 cri.go:89] found id: "5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9"
	I0911 11:55:37.104804 2250272 cri.go:89] found id: "e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0"
	I0911 11:55:37.104809 2250272 cri.go:89] found id: "5fe9ce438dbb0249e32982b221a832d59aa8c0b80125210e5e7d25739237ed08"
	I0911 11:55:37.104834 2250272 cri.go:89] found id: "e0e2f4e52d50ff3c3c93f858a14416317a89f637121d3a26730e2a55226f9eb3"
	I0911 11:55:37.104839 2250272 cri.go:89] found id: ""
	I0911 11:55:37.104895 2250272 ssh_runner.go:195] Run: sudo runc list -f json

** /stderr **
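(The captured stderr ends while minikube is enumerating the kube-system containers before pausing them, using the crictl label filter shown a few lines above and then `runc list`. The same listing can be reproduced against the node; a sketch reusing the exact filter from the log:
	out/minikube-linux-amd64 -p pause-474712 ssh \
	  "sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
)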
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-474712 -n pause-474712
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-474712 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-474712 logs -n 25: (1.323965569s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo cat              | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo cat              | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo find             | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo crio             | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-640433                       | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:54 UTC |
	| start   | -p force-systemd-flag-044713           | force-systemd-flag-044713 | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:55 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:54 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:54 UTC |
	| start   | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:55 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-474712                        | pause-474712              | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:56 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-715426              | stopped-upgrade-715426    | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-690677 sudo            | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:55 UTC |
	| start   | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-044713 ssh cat      | force-systemd-flag-044713 | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:55 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-044713           | force-systemd-flag-044713 | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:55 UTC |
	| start   | -p force-systemd-env-901219            | force-systemd-env-901219  | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-715426              | stopped-upgrade-715426    | jenkins | v1.31.2 | 11 Sep 23 11:56 UTC | 11 Sep 23 11:56 UTC |
	| start   | -p cert-expiration-758549              | cert-expiration-758549    | jenkins | v1.31.2 | 11 Sep 23 11:56 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:56:17
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:56:17.354265 2251224 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:56:17.354390 2251224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:56:17.354394 2251224 out.go:309] Setting ErrFile to fd 2...
	I0911 11:56:17.354397 2251224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:56:17.354594 2251224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:56:17.355205 2251224 out.go:303] Setting JSON to false
	I0911 11:56:17.356200 2251224 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236328,"bootTime":1694197049,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:56:17.356286 2251224 start.go:138] virtualization: kvm guest
	I0911 11:56:17.358785 2251224 out.go:177] * [cert-expiration-758549] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:56:17.360499 2251224 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:56:17.360514 2251224 notify.go:220] Checking for updates...
	I0911 11:56:17.362278 2251224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:56:17.364212 2251224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:56:17.365789 2251224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:56:17.367403 2251224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:56:17.370147 2251224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:56:17.372338 2251224 config.go:182] Loaded profile config "NoKubernetes-690677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0911 11:56:17.372485 2251224 config.go:182] Loaded profile config "force-systemd-env-901219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:56:17.372722 2251224 config.go:182] Loaded profile config "pause-474712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:56:17.372874 2251224 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:56:17.415327 2251224 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 11:56:17.417163 2251224 start.go:298] selected driver: kvm2
	I0911 11:56:17.417175 2251224 start.go:902] validating driver "kvm2" against <nil>
	I0911 11:56:17.417194 2251224 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:56:17.418061 2251224 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:56:17.418138 2251224 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:56:17.435154 2251224 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:56:17.435219 2251224 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:56:17.435527 2251224 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 11:56:17.435563 2251224 cni.go:84] Creating CNI manager for ""
	I0911 11:56:17.435574 2251224 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:56:17.435586 2251224 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 11:56:17.435594 2251224 start_flags.go:321] config:
	{Name:cert-expiration-758549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:cert-expiration-758549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Container
Runtime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:56:17.435787 2251224 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:56:17.438208 2251224 out.go:177] * Starting control plane node cert-expiration-758549 in cluster cert-expiration-758549
	I0911 11:56:14.534995 2250699 main.go:141] libmachine: (NoKubernetes-690677) Calling .Start
	I0911 11:56:14.535266 2250699 main.go:141] libmachine: (NoKubernetes-690677) Ensuring networks are active...
	I0911 11:56:14.536093 2250699 main.go:141] libmachine: (NoKubernetes-690677) Ensuring network default is active
	I0911 11:56:14.536400 2250699 main.go:141] libmachine: (NoKubernetes-690677) Ensuring network mk-NoKubernetes-690677 is active
	I0911 11:56:14.536675 2250699 main.go:141] libmachine: (NoKubernetes-690677) Getting domain xml...
	I0911 11:56:14.537373 2250699 main.go:141] libmachine: (NoKubernetes-690677) Creating domain...
	I0911 11:56:15.956698 2250699 main.go:141] libmachine: (NoKubernetes-690677) Waiting to get IP...
	I0911 11:56:15.957772 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:15.958240 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:15.958300 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:15.958220 2251047 retry.go:31] will retry after 261.30094ms: waiting for machine to come up
	I0911 11:56:16.221060 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:16.229665 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:16.229686 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:16.229581 2251047 retry.go:31] will retry after 254.836162ms: waiting for machine to come up
	I0911 11:56:16.806921 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:16.807532 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:16.807548 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:16.807471 2251047 retry.go:31] will retry after 304.233051ms: waiting for machine to come up
	I0911 11:56:17.113164 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:17.113728 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:17.113746 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:17.113692 2251047 retry.go:31] will retry after 567.542372ms: waiting for machine to come up
	I0911 11:56:17.683159 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:17.683708 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:17.683729 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:17.683650 2251047 retry.go:31] will retry after 546.054012ms: waiting for machine to come up
	I0911 11:56:18.231255 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:18.232477 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:18.232500 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:18.232446 2251047 retry.go:31] will retry after 635.955649ms: waiting for machine to come up
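	(Interleaved here, a parallel NoKubernetes-690677 start is polling libvirt for the domain's DHCP lease, retrying with short backoff until an IP appears. Outside of minikube the same state can be inspected directly with virsh, assuming the qemu:///system URI and the mk-NoKubernetes-690677 network named in the log; a sketch:
	  virsh -c qemu:///system domifaddr NoKubernetes-690677
	  virsh -c qemu:///system net-dhcp-leases mk-NoKubernetes-690677
	)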
	I0911 11:56:15.420232 2250272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:56:15.557544 2250272 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 11:56:15.557585 2250272 node_ready.go:35] waiting up to 6m0s for node "pause-474712" to be "Ready" ...
	I0911 11:56:15.568845 2250272 node_ready.go:49] node "pause-474712" has status "Ready":"True"
	I0911 11:56:15.568877 2250272 node_ready.go:38] duration metric: took 11.238336ms waiting for node "pause-474712" to be "Ready" ...
	I0911 11:56:15.568889 2250272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:56:15.577083 2250272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cdrcf" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:15.865648 2250272 pod_ready.go:92] pod "coredns-5dd5756b68-cdrcf" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:15.865674 2250272 pod_ready.go:81] duration metric: took 288.559758ms waiting for pod "coredns-5dd5756b68-cdrcf" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:15.865688 2250272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.266006 2250272 pod_ready.go:92] pod "etcd-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:16.266034 2250272 pod_ready.go:81] duration metric: took 400.340587ms waiting for pod "etcd-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.266044 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.665658 2250272 pod_ready.go:92] pod "kube-apiserver-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:16.665685 2250272 pod_ready.go:81] duration metric: took 399.634066ms waiting for pod "kube-apiserver-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.665700 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.066746 2250272 pod_ready.go:92] pod "kube-controller-manager-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:17.066771 2250272 pod_ready.go:81] duration metric: took 401.063726ms waiting for pod "kube-controller-manager-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.066782 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9krg2" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.465960 2250272 pod_ready.go:92] pod "kube-proxy-9krg2" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:17.465990 2250272 pod_ready.go:81] duration metric: took 399.201019ms waiting for pod "kube-proxy-9krg2" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.466003 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.867021 2250272 pod_ready.go:92] pod "kube-scheduler-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:17.867053 2250272 pod_ready.go:81] duration metric: took 401.041307ms waiting for pod "kube-scheduler-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.867065 2250272 pod_ready.go:38] duration metric: took 2.298162553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
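	(The pod_ready loop above is minikube's internal equivalent of waiting for the system-critical pods to become Ready. Checked from outside with kubectl, assuming the pause-474712 context this run writes to the kubeconfig, roughly:
	  kubectl --context pause-474712 -n kube-system wait pod \
	    --for=condition=Ready -l k8s-app=kube-dns --timeout=6m
	)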
	I0911 11:56:17.867089 2250272 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:56:17.867159 2250272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:56:17.883688 2250272 api_server.go:72] duration metric: took 2.466702807s to wait for apiserver process to appear ...
	I0911 11:56:17.883721 2250272 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:56:17.883743 2250272 api_server.go:253] Checking apiserver healthz at https://192.168.72.254:8443/healthz ...
	I0911 11:56:17.889883 2250272 api_server.go:279] https://192.168.72.254:8443/healthz returned 200:
	ok
	I0911 11:56:17.891435 2250272 api_server.go:141] control plane version: v1.28.1
	I0911 11:56:17.891478 2250272 api_server.go:131] duration metric: took 7.748541ms to wait for apiserver health ...
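	(The healthz probe at https://192.168.72.254:8443/healthz that returned 200 above can be reproduced with curl; a sketch, where -k skips verification of the minikubeCA-signed serving certificate and, depending on the cluster's anonymous-auth setting, the endpoint may answer 401/403 instead of "ok":
	  curl -k https://192.168.72.254:8443/healthz
	)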
	I0911 11:56:17.891490 2250272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:56:18.067689 2250272 system_pods.go:59] 6 kube-system pods found
	I0911 11:56:18.067718 2250272 system_pods.go:61] "coredns-5dd5756b68-cdrcf" [97d977cc-d0ee-4026-bfa0-61a585f56fd0] Running
	I0911 11:56:18.067723 2250272 system_pods.go:61] "etcd-pause-474712" [e1aa8183-9c49-4158-857d-e047bf347717] Running
	I0911 11:56:18.067727 2250272 system_pods.go:61] "kube-apiserver-pause-474712" [f1ac2888-8999-4e7a-99d6-bced8d02e978] Running
	I0911 11:56:18.067732 2250272 system_pods.go:61] "kube-controller-manager-pause-474712" [87e777d3-c161-4e3f-a521-272fa93727ca] Running
	I0911 11:56:18.067736 2250272 system_pods.go:61] "kube-proxy-9krg2" [c812f7b6-5c49-4df6-9d4e-499bf70284b0] Running
	I0911 11:56:18.067739 2250272 system_pods.go:61] "kube-scheduler-pause-474712" [ff4c289a-0dd8-45eb-a195-e8c2d5792498] Running
	I0911 11:56:18.067745 2250272 system_pods.go:74] duration metric: took 176.249482ms to wait for pod list to return data ...
	I0911 11:56:18.067753 2250272 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:56:18.264646 2250272 default_sa.go:45] found service account: "default"
	I0911 11:56:18.264677 2250272 default_sa.go:55] duration metric: took 196.919311ms for default service account to be created ...
	I0911 11:56:18.264686 2250272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:56:18.468650 2250272 system_pods.go:86] 6 kube-system pods found
	I0911 11:56:18.468693 2250272 system_pods.go:89] "coredns-5dd5756b68-cdrcf" [97d977cc-d0ee-4026-bfa0-61a585f56fd0] Running
	I0911 11:56:18.468702 2250272 system_pods.go:89] "etcd-pause-474712" [e1aa8183-9c49-4158-857d-e047bf347717] Running
	I0911 11:56:18.468709 2250272 system_pods.go:89] "kube-apiserver-pause-474712" [f1ac2888-8999-4e7a-99d6-bced8d02e978] Running
	I0911 11:56:18.468717 2250272 system_pods.go:89] "kube-controller-manager-pause-474712" [87e777d3-c161-4e3f-a521-272fa93727ca] Running
	I0911 11:56:18.468724 2250272 system_pods.go:89] "kube-proxy-9krg2" [c812f7b6-5c49-4df6-9d4e-499bf70284b0] Running
	I0911 11:56:18.468730 2250272 system_pods.go:89] "kube-scheduler-pause-474712" [ff4c289a-0dd8-45eb-a195-e8c2d5792498] Running
	I0911 11:56:18.468739 2250272 system_pods.go:126] duration metric: took 204.047613ms to wait for k8s-apps to be running ...
	I0911 11:56:18.468748 2250272 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:56:18.468827 2250272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:56:18.485851 2250272 system_svc.go:56] duration metric: took 17.086421ms WaitForService to wait for kubelet.
	I0911 11:56:18.485889 2250272 kubeadm.go:581] duration metric: took 3.068911181s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:56:18.485920 2250272 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:56:18.665397 2250272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:56:18.665426 2250272 node_conditions.go:123] node cpu capacity is 2
	I0911 11:56:18.665436 2250272 node_conditions.go:105] duration metric: took 179.510016ms to run NodePressure ...
	I0911 11:56:18.665447 2250272 start.go:228] waiting for startup goroutines ...
	I0911 11:56:18.665453 2250272 start.go:233] waiting for cluster config update ...
	I0911 11:56:18.665460 2250272 start.go:242] writing updated cluster config ...
	I0911 11:56:18.665839 2250272 ssh_runner.go:195] Run: rm -f paused
	I0911 11:56:18.735379 2250272 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 11:56:18.738982 2250272 out.go:177] * Done! kubectl is now configured to use "pause-474712" cluster and "default" namespace by default
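	(Once the second start reports Done!, the state this test asserts on can be spot-checked from the same kubeconfig; a minimal sketch:
	  kubectl --context pause-474712 get nodes -o wide
	  kubectl --context pause-474712 get pods -n kube-system
	)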
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 11:53:35 UTC, ends at Mon 2023-09-11 11:56:19 UTC. --
	Sep 11 11:56:18 pause-474712 crio[2505]: time="2023-09-11 11:56:18.698975741Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-cdrcf,Uid:97d977cc-d0ee-4026-bfa0-61a585f56fd0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336266506920,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:54:21.542865040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-474712,Uid:ea624c985eb19dbf55c562692246ceca,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336162374263,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea624c985eb19dbf55c562692246ceca,kubernetes.io/config.seen: 2023-09-11T11:54:09.411323409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-474712,Uid:7cbc1b6c173dfcbc815a42ad9f85335c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336149425577,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173df
cbc815a42ad9f85335c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7cbc1b6c173dfcbc815a42ad9f85335c,kubernetes.io/config.seen: 2023-09-11T11:54:09.411324415Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-474712,Uid:9412145c5fb4cf76c08abbc9005ae83d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336128231531,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.254:8443,kubernetes.io/config.hash: 9412145c5fb4cf76c08abbc9005ae83d,kubernetes.io/config.seen: 2023-09-11T11:54:09.411322227Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&PodSandboxMetadata{Name:kube-proxy-9krg2,Uid:c812f7b6-5c49-4df6-9d4e-499bf70284b0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336060510815,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:54:21.365968017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&PodSandboxMetadata{Name:etcd-pause-474712,Uid:bc089628cfa85bdc4268697f305d8ec4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336026707870,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.254:2379,kubernetes.io/config.hash: bc089628cfa85bdc4268697f305d8ec4,kubernetes.io/config.seen: 2023-09-11T11:54:09.411317853Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-474712,Uid:9412145c5fb4cf76c08abbc9005ae83d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1694433331039158194,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernete
s.io/kube-apiserver.advertise-address.endpoint: 192.168.72.254:8443,kubernetes.io/config.hash: 9412145c5fb4cf76c08abbc9005ae83d,kubernetes.io/config.seen: 2023-09-11T11:54:09.411322227Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-474712,Uid:ea624c985eb19dbf55c562692246ceca,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1694433331022756151,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea624c985eb19dbf55c562692246ceca,kubernetes.io/config.seen: 2023-09-11T11:54:09.411323409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/cha
in.go:25" id=22a29ef2-e189-4400-8cf5-42298127ca36 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 11:56:18 pause-474712 crio[2505]: time="2023-09-11 11:56:18.699661967Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e4cf3aa7-0de6-49f3-afd7-3479d3228823 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:56:18 pause-474712 crio[2505]: time="2023-09-11 11:56:18.699721438Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e4cf3aa7-0de6-49f3-afd7-3479d3228823 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:56:18 pause-474712 crio[2505]: time="2023-09-11 11:56:18.699991473Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e4cf3aa7-0de6-49f3-afd7-3479d3228823 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.255066546Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e6ad8b6c-5edc-4217-8c11-0b493c25a48d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.255241427Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e6ad8b6c-5edc-4217-8c11-0b493c25a48d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.255743693Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e6ad8b6c-5edc-4217-8c11-0b493c25a48d name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.301880740Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d9ba5cd1-1edb-4427-b8eb-ac1c82c0ed7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.301951696Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d9ba5cd1-1edb-4427-b8eb-ac1c82c0ed7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.302242006Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d9ba5cd1-1edb-4427-b8eb-ac1c82c0ed7c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.342072621Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=fcee8e6d-9366-4b8f-b7de-1f1912e666e4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.342141326Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fcee8e6d-9366-4b8f-b7de-1f1912e666e4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.342525216Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fcee8e6d-9366-4b8f-b7de-1f1912e666e4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.384655367Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f934be91-8f0b-4052-baef-0b7570f03137 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.384802631Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f934be91-8f0b-4052-baef-0b7570f03137 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.385129390Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f934be91-8f0b-4052-baef-0b7570f03137 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.427857380Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3c22f3d3-712a-41ae-906f-0216aa0c623f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.427954059Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3c22f3d3-712a-41ae-906f-0216aa0c623f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.428237922Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3c22f3d3-712a-41ae-906f-0216aa0c623f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.472955322Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aebc06a6-c452-4184-a112-93197a44a698 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.473032741Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aebc06a6-c452-4184-a112-93197a44a698 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.473336671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aebc06a6-c452-4184-a112-93197a44a698 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.513235605Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=3639723a-f167-46bd-af86-a87585a8fa1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.513330534Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=3639723a-f167-46bd-af86-a87585a8fa1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:19 pause-474712 crio[2505]: time="2023-09-11 11:56:19.513700846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=3639723a-f167-46bd-af86-a87585a8fa1c name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	641bc5c9392e3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   18 seconds ago      Running             coredns                   2                   81876dab9e684
	67fefc7cb4b4f       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   18 seconds ago      Running             kube-proxy                2                   9c581fdfd8c29
	9ddb3c53ef7e1       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   24 seconds ago      Running             kube-scheduler            3                   1c6296f89032c
	db980e15337c5       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   24 seconds ago      Running             kube-controller-manager   2                   c2df140325c64
	b912da38594e6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   24 seconds ago      Running             etcd                      3                   b6a4cbf99e959
	2bf1d48aef9bb       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   27 seconds ago      Running             kube-apiserver            2                   09719c868f4cd
	52f9b25d22d4f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   34 seconds ago      Exited              etcd                      2                   b6a4cbf99e959
	7ce23adc307e2       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   34 seconds ago      Exited              kube-scheduler            2                   1c6296f89032c
	e92dae80461ab       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   41 seconds ago      Exited              coredns                   1                   81876dab9e684
	fa5684fdafe5b       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   41 seconds ago      Exited              kube-proxy                1                   9c581fdfd8c29
	5442ef36f27e0       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   47 seconds ago      Exited              kube-controller-manager   1                   97653e3532840
	e53a2209537c2       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   47 seconds ago      Exited              kube-apiserver            1                   c77341ad163c5
	
	* 
	* ==> coredns [641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54710 - 8579 "HINFO IN 4109688945192574580.8872295817894910676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013798248s
	
	* 
	* ==> coredns [e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45185 - 12936 "HINFO IN 1189010001419627576.687696111965202414. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010661557s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-474712
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-474712
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=pause-474712
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_54_09_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:54:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-474712
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:56:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.254
	  Hostname:    pause-474712
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 2833621091844586ab2599bce82e25b6
	  System UUID:                28336210-9184-4586-ab25-99bce82e25b6
	  Boot ID:                    9689fff7-4a54-45ab-b958-1d59fbb48245
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-cdrcf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     118s
	  kube-system                 etcd-pause-474712                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m10s
	  kube-system                 kube-apiserver-pause-474712             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-controller-manager-pause-474712    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m10s
	  kube-system                 kube-proxy-9krg2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         118s
	  kube-system                 kube-scheduler-pause-474712             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m10s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 113s                   kube-proxy       
	  Normal  Starting                 18s                    kube-proxy       
	  Normal  Starting                 2m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m21s)  kubelet          Node pause-474712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m21s)  kubelet          Node pause-474712 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m21s)  kubelet          Node pause-474712 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m10s                  kubelet          Node pause-474712 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m10s                  kubelet          Node pause-474712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m10s                  kubelet          Node pause-474712 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m10s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m10s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m9s                   kubelet          Node pause-474712 status is now: NodeReady
	  Normal  RegisteredNode           119s                   node-controller  Node pause-474712 event: Registered Node pause-474712 in Controller
	  Normal  Starting                 25s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  25s (x8 over 25s)      kubelet          Node pause-474712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    25s (x8 over 25s)      kubelet          Node pause-474712 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     25s (x7 over 25s)      kubelet          Node pause-474712 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  25s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6s                     node-controller  Node pause-474712 event: Registered Node pause-474712 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep11 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.117508] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.619535] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.901393] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.215184] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.386116] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.583996] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.127989] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.181200] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.121012] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.230400] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +10.526899] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[Sep11 11:54] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[Sep11 11:55] kauditd_printk_skb: 19 callbacks suppressed
	[ +28.997668] systemd-fstab-generator[2230]: Ignoring "noauto" for root device
	[  +0.355637] systemd-fstab-generator[2279]: Ignoring "noauto" for root device
	[  +0.347828] systemd-fstab-generator[2298]: Ignoring "noauto" for root device
	[  +0.319018] systemd-fstab-generator[2320]: Ignoring "noauto" for root device
	[  +0.568968] systemd-fstab-generator[2395]: Ignoring "noauto" for root device
	[ +20.450961] systemd-fstab-generator[3386]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585] <==
	* {"level":"info","ts":"2023-09-11T11:55:46.097086Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"7.236203ms"}
	{"level":"info","ts":"2023-09-11T11:55:46.106487Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-09-11T11:55:46.118693Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","commit-index":454}
	{"level":"info","ts":"2023-09-11T11:55:46.118883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd switched to configuration voters=()"}
	{"level":"info","ts":"2023-09-11T11:55:46.118954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became follower at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:46.118994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 37479b9ddf6d22fd [peers: [], term: 2, commit: 454, applied: 0, lastindex: 454, lastterm: 2]"}
	{"level":"warn","ts":"2023-09-11T11:55:46.124818Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-09-11T11:55:46.145813Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":428}
	{"level":"info","ts":"2023-09-11T11:55:46.148644Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-09-11T11:55:46.151922Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"37479b9ddf6d22fd","timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:55:46.152337Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"37479b9ddf6d22fd"}
	{"level":"info","ts":"2023-09-11T11:55:46.152422Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"37479b9ddf6d22fd","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-09-11T11:55:46.152779Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-11T11:55:46.15295Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:46.153005Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:46.153032Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:46.153456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd switched to configuration voters=(3983323497793135357)"}
	{"level":"info","ts":"2023-09-11T11:55:46.153641Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","added-peer-id":"37479b9ddf6d22fd","added-peer-peer-urls":["https://192.168.72.254:2380"]}
	{"level":"info","ts":"2023-09-11T11:55:46.153789Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:46.153835Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:46.160346Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:55:46.160921Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"37479b9ddf6d22fd","initial-advertise-peer-urls":["https://192.168.72.254:2380"],"listen-peer-urls":["https://192.168.72.254:2380"],"advertise-client-urls":["https://192.168.72.254:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.254:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:55:46.16075Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:46.161381Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:46.161325Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> etcd [b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163] <==
	* {"level":"info","ts":"2023-09-11T11:55:57.131532Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:57.131542Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:57.134638Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:55:57.13491Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"37479b9ddf6d22fd","initial-advertise-peer-urls":["https://192.168.72.254:2380"],"listen-peer-urls":["https://192.168.72.254:2380"],"advertise-client-urls":["https://192.168.72.254:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.254:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:55:57.135023Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T11:55:57.135269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd switched to configuration voters=(3983323497793135357)"}
	{"level":"info","ts":"2023-09-11T11:55:57.135344Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","added-peer-id":"37479b9ddf6d22fd","added-peer-peer-urls":["https://192.168.72.254:2380"]}
	{"level":"info","ts":"2023-09-11T11:55:57.135454Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:57.135498Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:57.137806Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:57.137907Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:58.376803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:58.376928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:58.376976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd received MsgPreVoteResp from 37479b9ddf6d22fd at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:58.377021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.37705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd received MsgVoteResp from 37479b9ddf6d22fd at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.377076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became leader at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.377102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 37479b9ddf6d22fd elected leader 37479b9ddf6d22fd at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.382647Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"37479b9ddf6d22fd","local-member-attributes":"{Name:pause-474712 ClientURLs:[https://192.168.72.254:2379]}","request-path":"/0/members/37479b9ddf6d22fd/attributes","cluster-id":"cd374d1b1758c885","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:55:58.382661Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:55:58.382886Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:55:58.382935Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:55:58.382726Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:55:58.384158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.254:2379"}
	{"level":"info","ts":"2023-09-11T11:55:58.384188Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:56:19 up 2 min,  0 users,  load average: 1.04, 0.51, 0.19
	Linux pause-474712 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b] <==
	* I0911 11:55:59.902337       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0911 11:55:59.902374       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0911 11:55:59.902408       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0911 11:55:59.947874       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0911 11:55:59.973179       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 11:55:59.973243       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 11:55:59.976750       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:55:59.977490       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 11:55:59.977644       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:56:00.003995       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:56:00.004037       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:56:00.004068       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:56:00.004077       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:56:00.004087       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:56:00.011909       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:56:00.026293       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0911 11:56:00.052175       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0911 11:56:00.851671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:56:01.588023       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 11:56:01.599701       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 11:56:01.684763       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0911 11:56:01.746021       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:56:01.754713       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 11:56:13.055080       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:56:13.252717       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0] <==
	* 
	* 
	* ==> kube-controller-manager [5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9] <==
	* 
	* 
	* ==> kube-controller-manager [db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f] <==
	* I0911 11:56:13.053639       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0911 11:56:13.059289       1 shared_informer.go:318] Caches are synced for taint
	I0911 11:56:13.059524       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0911 11:56:13.059955       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-474712"
	I0911 11:56:13.060194       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0911 11:56:13.060366       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0911 11:56:13.060937       1 event.go:307] "Event occurred" object="pause-474712" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-474712 event: Registered Node pause-474712 in Controller"
	I0911 11:56:13.061288       1 taint_manager.go:211] "Sending events to api server"
	I0911 11:56:13.061411       1 shared_informer.go:318] Caches are synced for persistent volume
	I0911 11:56:13.064255       1 shared_informer.go:318] Caches are synced for node
	I0911 11:56:13.064514       1 range_allocator.go:174] "Sending events to api server"
	I0911 11:56:13.064775       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0911 11:56:13.064803       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0911 11:56:13.064913       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0911 11:56:13.066491       1 shared_informer.go:318] Caches are synced for endpoint
	I0911 11:56:13.071297       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0911 11:56:13.072663       1 shared_informer.go:318] Caches are synced for disruption
	I0911 11:56:13.077990       1 shared_informer.go:318] Caches are synced for attach detach
	I0911 11:56:13.144893       1 shared_informer.go:318] Caches are synced for namespace
	I0911 11:56:13.164993       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:56:13.181786       1 shared_informer.go:318] Caches are synced for service account
	I0911 11:56:13.194330       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:56:13.612976       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:56:13.648404       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:56:13.648458       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea] <==
	* I0911 11:56:01.136716       1 server_others.go:69] "Using iptables proxy"
	I0911 11:56:01.161445       1 node.go:141] Successfully retrieved node IP: 192.168.72.254
	I0911 11:56:01.245861       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 11:56:01.245951       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 11:56:01.250844       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:56:01.250948       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:56:01.251126       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:56:01.251134       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:56:01.254927       1 config.go:188] "Starting service config controller"
	I0911 11:56:01.254970       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:56:01.254999       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:56:01.255003       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:56:01.255439       1 config.go:315] "Starting node config controller"
	I0911 11:56:01.255445       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:56:01.355350       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 11:56:01.355616       1 shared_informer.go:318] Caches are synced for node config
	I0911 11:56:01.355371       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580] <==
	* I0911 11:55:38.176426       1 server_others.go:69] "Using iptables proxy"
	E0911 11:55:38.181528       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:39.361785       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:41.486981       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:46.150895       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a] <==
	* E0911 11:55:47.246468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.246676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.72.254:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.246726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.72.254:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.246861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.72.254:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.246906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.72.254:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.247066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.247115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.254:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.254:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.72.254:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.72.254:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.72.254:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.72.254:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.72.254:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.72.254:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248476       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.254:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.254:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.494920       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0911 11:55:47.495642       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0911 11:55:47.495740       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0911 11:55:47.495932       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:55:47.495968       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0911 11:55:47.496061       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24] <==
	* I0911 11:55:57.197218       1 serving.go:348] Generated self-signed cert in-memory
	W0911 11:55:59.920893       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 11:55:59.921021       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 11:55:59.921033       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:55:59.921040       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:55:59.953495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 11:55:59.953657       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:55:59.955878       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 11:55:59.959126       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 11:55:59.959192       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:55:59.959214       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 11:56:00.060484       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:53:35 UTC, ends at Mon 2023-09-11 11:56:20 UTC. --
	Sep 11 11:55:54 pause-474712 kubelet[3392]: I0911 11:55:54.849161    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9412145c5fb4cf76c08abbc9005ae83d-usr-share-ca-certificates\") pod \"kube-apiserver-pause-474712\" (UID: \"9412145c5fb4cf76c08abbc9005ae83d\") " pod="kube-system/kube-apiserver-pause-474712"
	Sep 11 11:55:54 pause-474712 kubelet[3392]: I0911 11:55:54.849183    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea624c985eb19dbf55c562692246ceca-ca-certs\") pod \"kube-controller-manager-pause-474712\" (UID: \"ea624c985eb19dbf55c562692246ceca\") " pod="kube-system/kube-controller-manager-pause-474712"
	Sep 11 11:55:54 pause-474712 kubelet[3392]: I0911 11:55:54.849200    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea624c985eb19dbf55c562692246ceca-flexvolume-dir\") pod \"kube-controller-manager-pause-474712\" (UID: \"ea624c985eb19dbf55c562692246ceca\") " pod="kube-system/kube-controller-manager-pause-474712"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: E0911 11:55:55.041767    3392 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-474712?timeout=10s\": dial tcp 192.168.72.254:8443: connect: connection refused" interval="800ms"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.063189    3392 scope.go:117] "RemoveContainer" containerID="52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.064701    3392 scope.go:117] "RemoveContainer" containerID="e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.067905    3392 scope.go:117] "RemoveContainer" containerID="7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.068796    3392 scope.go:117] "RemoveContainer" containerID="5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.149288    3392 kubelet_node_status.go:70] "Attempting to register node" node="pause-474712"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: E0911 11:55:55.149746    3392 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.254:8443: connect: connection refused" node="pause-474712"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.951009    3392 kubelet_node_status.go:70] "Attempting to register node" node="pause-474712"
	Sep 11 11:55:59 pause-474712 kubelet[3392]: E0911 11:55:59.956352    3392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-474712\" already exists" pod="kube-system/kube-apiserver-pause-474712"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.010363    3392 kubelet_node_status.go:108] "Node was previously registered" node="pause-474712"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.010489    3392 kubelet_node_status.go:73] "Successfully registered node" node="pause-474712"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.012325    3392 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.014493    3392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.415257    3392 apiserver.go:52] "Watching apiserver"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.419422    3392 topology_manager.go:215] "Topology Admit Handler" podUID="c812f7b6-5c49-4df6-9d4e-499bf70284b0" podNamespace="kube-system" podName="kube-proxy-9krg2"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.419552    3392 topology_manager.go:215] "Topology Admit Handler" podUID="97d977cc-d0ee-4026-bfa0-61a585f56fd0" podNamespace="kube-system" podName="coredns-5dd5756b68-cdrcf"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.434940    3392 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.496272    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c812f7b6-5c49-4df6-9d4e-499bf70284b0-xtables-lock\") pod \"kube-proxy-9krg2\" (UID: \"c812f7b6-5c49-4df6-9d4e-499bf70284b0\") " pod="kube-system/kube-proxy-9krg2"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.496402    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c812f7b6-5c49-4df6-9d4e-499bf70284b0-lib-modules\") pod \"kube-proxy-9krg2\" (UID: \"c812f7b6-5c49-4df6-9d4e-499bf70284b0\") " pod="kube-system/kube-proxy-9krg2"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.720211    3392 scope.go:117] "RemoveContainer" containerID="e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.721115    3392 scope.go:117] "RemoveContainer" containerID="fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580"
	Sep 11 11:56:04 pause-474712 kubelet[3392]: I0911 11:56:04.255889    3392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-474712 -n pause-474712
helpers_test.go:261: (dbg) Run:  kubectl --context pause-474712 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p pause-474712 -n pause-474712
helpers_test.go:244: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/SecondStartNoReconfiguration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p pause-474712 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p pause-474712 logs -n 25: (1.338957394s)
helpers_test.go:252: TestPause/serial/SecondStartNoReconfiguration logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | cri-dockerd --version                  |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl status containerd            |                           |         |         |                     |                     |
	|         | --all --full --no-pager                |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl cat containerd               |                           |         |         |                     |                     |
	|         | --no-pager                             |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo cat              | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo cat              | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | containerd config dump                 |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl status crio --all            |                           |         |         |                     |                     |
	|         | --full --no-pager                      |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo                  | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo find             | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |         |         |                     |                     |
	| ssh     | -p cilium-640433 sudo crio             | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC |                     |
	|         | config                                 |                           |         |         |                     |                     |
	| delete  | -p cilium-640433                       | cilium-640433             | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:54 UTC |
	| start   | -p force-systemd-flag-044713           | force-systemd-flag-044713 | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:55 UTC |
	|         | --memory=2048 --force-systemd          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:54 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:54 UTC |
	| start   | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:54 UTC | 11 Sep 23 11:55 UTC |
	|         | --no-kubernetes --driver=kvm2          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p pause-474712                        | pause-474712              | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:56 UTC |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| start   | -p stopped-upgrade-715426              | stopped-upgrade-715426    | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | --memory=2200                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-690677 sudo            | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | systemctl is-active --quiet            |                           |         |         |                     |                     |
	|         | service kubelet                        |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:55 UTC |
	| start   | -p NoKubernetes-690677                 | NoKubernetes-690677       | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-044713 ssh cat      | force-systemd-flag-044713 | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:55 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-044713           | force-systemd-flag-044713 | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC | 11 Sep 23 11:55 UTC |
	| start   | -p force-systemd-env-901219            | force-systemd-env-901219  | jenkins | v1.31.2 | 11 Sep 23 11:55 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --alsologtostderr                      |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                     |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	| delete  | -p stopped-upgrade-715426              | stopped-upgrade-715426    | jenkins | v1.31.2 | 11 Sep 23 11:56 UTC | 11 Sep 23 11:56 UTC |
	| start   | -p cert-expiration-758549              | cert-expiration-758549    | jenkins | v1.31.2 | 11 Sep 23 11:56 UTC |                     |
	|         | --memory=2048                          |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |         |         |                     |                     |
	|         | --driver=kvm2                          |                           |         |         |                     |                     |
	|         | --container-runtime=crio               |                           |         |         |                     |                     |
	|---------|----------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 11:56:17
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 11:56:17.354265 2251224 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:56:17.354390 2251224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:56:17.354394 2251224 out.go:309] Setting ErrFile to fd 2...
	I0911 11:56:17.354397 2251224 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:56:17.354594 2251224 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:56:17.355205 2251224 out.go:303] Setting JSON to false
	I0911 11:56:17.356200 2251224 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236328,"bootTime":1694197049,"procs":244,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:56:17.356286 2251224 start.go:138] virtualization: kvm guest
	I0911 11:56:17.358785 2251224 out.go:177] * [cert-expiration-758549] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:56:17.360499 2251224 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:56:17.360514 2251224 notify.go:220] Checking for updates...
	I0911 11:56:17.362278 2251224 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:56:17.364212 2251224 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:56:17.365789 2251224 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:56:17.367403 2251224 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:56:17.370147 2251224 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:56:17.372338 2251224 config.go:182] Loaded profile config "NoKubernetes-690677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0911 11:56:17.372485 2251224 config.go:182] Loaded profile config "force-systemd-env-901219": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:56:17.372722 2251224 config.go:182] Loaded profile config "pause-474712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:56:17.372874 2251224 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:56:17.415327 2251224 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 11:56:17.417163 2251224 start.go:298] selected driver: kvm2
	I0911 11:56:17.417175 2251224 start.go:902] validating driver "kvm2" against <nil>
	I0911 11:56:17.417194 2251224 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:56:17.418061 2251224 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:56:17.418138 2251224 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 11:56:17.435154 2251224 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 11:56:17.435219 2251224 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 11:56:17.435527 2251224 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 11:56:17.435563 2251224 cni.go:84] Creating CNI manager for ""
	I0911 11:56:17.435574 2251224 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 11:56:17.435586 2251224 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 11:56:17.435594 2251224 start_flags.go:321] config:
	{Name:cert-expiration-758549 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2048 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:cert-expiration-758549 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:3m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:56:17.435787 2251224 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 11:56:17.438208 2251224 out.go:177] * Starting control plane node cert-expiration-758549 in cluster cert-expiration-758549
	I0911 11:56:14.534995 2250699 main.go:141] libmachine: (NoKubernetes-690677) Calling .Start
	I0911 11:56:14.535266 2250699 main.go:141] libmachine: (NoKubernetes-690677) Ensuring networks are active...
	I0911 11:56:14.536093 2250699 main.go:141] libmachine: (NoKubernetes-690677) Ensuring network default is active
	I0911 11:56:14.536400 2250699 main.go:141] libmachine: (NoKubernetes-690677) Ensuring network mk-NoKubernetes-690677 is active
	I0911 11:56:14.536675 2250699 main.go:141] libmachine: (NoKubernetes-690677) Getting domain xml...
	I0911 11:56:14.537373 2250699 main.go:141] libmachine: (NoKubernetes-690677) Creating domain...
	I0911 11:56:15.956698 2250699 main.go:141] libmachine: (NoKubernetes-690677) Waiting to get IP...
	I0911 11:56:15.957772 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:15.958240 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:15.958300 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:15.958220 2251047 retry.go:31] will retry after 261.30094ms: waiting for machine to come up
	I0911 11:56:16.221060 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:16.229665 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:16.229686 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:16.229581 2251047 retry.go:31] will retry after 254.836162ms: waiting for machine to come up
	I0911 11:56:16.806921 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:16.807532 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:16.807548 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:16.807471 2251047 retry.go:31] will retry after 304.233051ms: waiting for machine to come up
	I0911 11:56:17.113164 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:17.113728 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:17.113746 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:17.113692 2251047 retry.go:31] will retry after 567.542372ms: waiting for machine to come up
	I0911 11:56:17.683159 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:17.683708 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:17.683729 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:17.683650 2251047 retry.go:31] will retry after 546.054012ms: waiting for machine to come up
	I0911 11:56:18.231255 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | domain NoKubernetes-690677 has defined MAC address 52:54:00:f2:a1:c8 in network mk-NoKubernetes-690677
	I0911 11:56:18.232477 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | unable to find current IP address of domain NoKubernetes-690677 in network mk-NoKubernetes-690677
	I0911 11:56:18.232500 2250699 main.go:141] libmachine: (NoKubernetes-690677) DBG | I0911 11:56:18.232446 2251047 retry.go:31] will retry after 635.955649ms: waiting for machine to come up
	I0911 11:56:15.420232 2250272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:56:15.557544 2250272 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 11:56:15.557585 2250272 node_ready.go:35] waiting up to 6m0s for node "pause-474712" to be "Ready" ...
	I0911 11:56:15.568845 2250272 node_ready.go:49] node "pause-474712" has status "Ready":"True"
	I0911 11:56:15.568877 2250272 node_ready.go:38] duration metric: took 11.238336ms waiting for node "pause-474712" to be "Ready" ...
	I0911 11:56:15.568889 2250272 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:56:15.577083 2250272 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cdrcf" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:15.865648 2250272 pod_ready.go:92] pod "coredns-5dd5756b68-cdrcf" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:15.865674 2250272 pod_ready.go:81] duration metric: took 288.559758ms waiting for pod "coredns-5dd5756b68-cdrcf" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:15.865688 2250272 pod_ready.go:78] waiting up to 6m0s for pod "etcd-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.266006 2250272 pod_ready.go:92] pod "etcd-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:16.266034 2250272 pod_ready.go:81] duration metric: took 400.340587ms waiting for pod "etcd-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.266044 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.665658 2250272 pod_ready.go:92] pod "kube-apiserver-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:16.665685 2250272 pod_ready.go:81] duration metric: took 399.634066ms waiting for pod "kube-apiserver-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:16.665700 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.066746 2250272 pod_ready.go:92] pod "kube-controller-manager-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:17.066771 2250272 pod_ready.go:81] duration metric: took 401.063726ms waiting for pod "kube-controller-manager-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.066782 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-9krg2" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.465960 2250272 pod_ready.go:92] pod "kube-proxy-9krg2" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:17.465990 2250272 pod_ready.go:81] duration metric: took 399.201019ms waiting for pod "kube-proxy-9krg2" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.466003 2250272 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.867021 2250272 pod_ready.go:92] pod "kube-scheduler-pause-474712" in "kube-system" namespace has status "Ready":"True"
	I0911 11:56:17.867053 2250272 pod_ready.go:81] duration metric: took 401.041307ms waiting for pod "kube-scheduler-pause-474712" in "kube-system" namespace to be "Ready" ...
	I0911 11:56:17.867065 2250272 pod_ready.go:38] duration metric: took 2.298162553s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 11:56:17.867089 2250272 api_server.go:52] waiting for apiserver process to appear ...
	I0911 11:56:17.867159 2250272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:56:17.883688 2250272 api_server.go:72] duration metric: took 2.466702807s to wait for apiserver process to appear ...
	I0911 11:56:17.883721 2250272 api_server.go:88] waiting for apiserver healthz status ...
	I0911 11:56:17.883743 2250272 api_server.go:253] Checking apiserver healthz at https://192.168.72.254:8443/healthz ...
	I0911 11:56:17.889883 2250272 api_server.go:279] https://192.168.72.254:8443/healthz returned 200:
	ok
	I0911 11:56:17.891435 2250272 api_server.go:141] control plane version: v1.28.1
	I0911 11:56:17.891478 2250272 api_server.go:131] duration metric: took 7.748541ms to wait for apiserver health ...
	I0911 11:56:17.891490 2250272 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 11:56:18.067689 2250272 system_pods.go:59] 6 kube-system pods found
	I0911 11:56:18.067718 2250272 system_pods.go:61] "coredns-5dd5756b68-cdrcf" [97d977cc-d0ee-4026-bfa0-61a585f56fd0] Running
	I0911 11:56:18.067723 2250272 system_pods.go:61] "etcd-pause-474712" [e1aa8183-9c49-4158-857d-e047bf347717] Running
	I0911 11:56:18.067727 2250272 system_pods.go:61] "kube-apiserver-pause-474712" [f1ac2888-8999-4e7a-99d6-bced8d02e978] Running
	I0911 11:56:18.067732 2250272 system_pods.go:61] "kube-controller-manager-pause-474712" [87e777d3-c161-4e3f-a521-272fa93727ca] Running
	I0911 11:56:18.067736 2250272 system_pods.go:61] "kube-proxy-9krg2" [c812f7b6-5c49-4df6-9d4e-499bf70284b0] Running
	I0911 11:56:18.067739 2250272 system_pods.go:61] "kube-scheduler-pause-474712" [ff4c289a-0dd8-45eb-a195-e8c2d5792498] Running
	I0911 11:56:18.067745 2250272 system_pods.go:74] duration metric: took 176.249482ms to wait for pod list to return data ...
	I0911 11:56:18.067753 2250272 default_sa.go:34] waiting for default service account to be created ...
	I0911 11:56:18.264646 2250272 default_sa.go:45] found service account: "default"
	I0911 11:56:18.264677 2250272 default_sa.go:55] duration metric: took 196.919311ms for default service account to be created ...
	I0911 11:56:18.264686 2250272 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 11:56:18.468650 2250272 system_pods.go:86] 6 kube-system pods found
	I0911 11:56:18.468693 2250272 system_pods.go:89] "coredns-5dd5756b68-cdrcf" [97d977cc-d0ee-4026-bfa0-61a585f56fd0] Running
	I0911 11:56:18.468702 2250272 system_pods.go:89] "etcd-pause-474712" [e1aa8183-9c49-4158-857d-e047bf347717] Running
	I0911 11:56:18.468709 2250272 system_pods.go:89] "kube-apiserver-pause-474712" [f1ac2888-8999-4e7a-99d6-bced8d02e978] Running
	I0911 11:56:18.468717 2250272 system_pods.go:89] "kube-controller-manager-pause-474712" [87e777d3-c161-4e3f-a521-272fa93727ca] Running
	I0911 11:56:18.468724 2250272 system_pods.go:89] "kube-proxy-9krg2" [c812f7b6-5c49-4df6-9d4e-499bf70284b0] Running
	I0911 11:56:18.468730 2250272 system_pods.go:89] "kube-scheduler-pause-474712" [ff4c289a-0dd8-45eb-a195-e8c2d5792498] Running
	I0911 11:56:18.468739 2250272 system_pods.go:126] duration metric: took 204.047613ms to wait for k8s-apps to be running ...
	I0911 11:56:18.468748 2250272 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 11:56:18.468827 2250272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:56:18.485851 2250272 system_svc.go:56] duration metric: took 17.086421ms WaitForService to wait for kubelet.
	I0911 11:56:18.485889 2250272 kubeadm.go:581] duration metric: took 3.068911181s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 11:56:18.485920 2250272 node_conditions.go:102] verifying NodePressure condition ...
	I0911 11:56:18.665397 2250272 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 11:56:18.665426 2250272 node_conditions.go:123] node cpu capacity is 2
	I0911 11:56:18.665436 2250272 node_conditions.go:105] duration metric: took 179.510016ms to run NodePressure ...
	I0911 11:56:18.665447 2250272 start.go:228] waiting for startup goroutines ...
	I0911 11:56:18.665453 2250272 start.go:233] waiting for cluster config update ...
	I0911 11:56:18.665460 2250272 start.go:242] writing updated cluster config ...
	I0911 11:56:18.665839 2250272 ssh_runner.go:195] Run: rm -f paused
	I0911 11:56:18.735379 2250272 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 11:56:18.738982 2250272 out.go:177] * Done! kubectl is now configured to use "pause-474712" cluster and "default" namespace by default
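
For reference, the apiserver health wait recorded above (api_server.go) boils down to polling https://192.168.72.254:8443/healthz until it answers 200 "ok". A minimal Go sketch of such a probe, assuming the endpoint from this log and skipping TLS verification purely for illustration; this is not minikube's own implementation:

    // Hypothetical sketch: poll an apiserver /healthz endpoint until it
    // returns HTTP 200 "ok", mirroring the wait logged above.
    package main

    import (
    	"crypto/tls"
    	"fmt"
    	"io"
    	"net/http"
    	"time"
    )

    func waitForHealthz(url string, timeout time.Duration) error {
    	// TLS verification is skipped only for this sketch; a real client
    	// would load the cluster CA from the kubeconfig instead.
    	client := &http.Client{
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    		Timeout:   5 * time.Second,
    	}
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		resp, err := client.Get(url)
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
    				return nil
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    	return fmt.Errorf("apiserver %s not healthy within %s", url, timeout)
    }

    func main() {
    	// Address taken from the log above; adjust for your cluster.
    	if err := waitForHealthz("https://192.168.72.254:8443/healthz", time.Minute); err != nil {
    		panic(err)
    	}
    }

A kubeconfig-backed client would authenticate with the cluster CA and client certificates rather than disabling verification.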
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 11:53:35 UTC, ends at Mon 2023-09-11 11:56:21 UTC. --
	Sep 11 11:56:20 pause-474712 crio[2505]: time="2023-09-11 11:56:20.705969482Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-cdrcf,Uid:97d977cc-d0ee-4026-bfa0-61a585f56fd0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336266506920,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:54:21.542865040Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-474712,Uid:ea624c985eb19dbf55c562692246ceca,Namespace:kub
e-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336162374263,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea624c985eb19dbf55c562692246ceca,kubernetes.io/config.seen: 2023-09-11T11:54:09.411323409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&PodSandboxMetadata{Name:kube-scheduler-pause-474712,Uid:7cbc1b6c173dfcbc815a42ad9f85335c,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336149425577,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173df
cbc815a42ad9f85335c,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 7cbc1b6c173dfcbc815a42ad9f85335c,kubernetes.io/config.seen: 2023-09-11T11:54:09.411324415Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-474712,Uid:9412145c5fb4cf76c08abbc9005ae83d,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336128231531,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.254:8443,kubernetes.io/config.hash: 9412145c5fb4cf76c08abbc9005ae83d,kubernetes.io/config.seen: 2023-09-11T11:54:09.411322227Z,kubernetes.io/config.source:
file,},RuntimeHandler:,},&PodSandbox{Id:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&PodSandboxMetadata{Name:kube-proxy-9krg2,Uid:c812f7b6-5c49-4df6-9d4e-499bf70284b0,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336060510815,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T11:54:21.365968017Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&PodSandboxMetadata{Name:etcd-pause-474712,Uid:bc089628cfa85bdc4268697f305d8ec4,Namespace:kube-system,Attempt:2,},State:SANDBOX_READY,CreatedAt:1694433336026707870,Labels:map[string]string{component: etcd,io.kubernetes.contai
ner.name: POD,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.254:2379,kubernetes.io/config.hash: bc089628cfa85bdc4268697f305d8ec4,kubernetes.io/config.seen: 2023-09-11T11:54:09.411317853Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&PodSandboxMetadata{Name:kube-apiserver-pause-474712,Uid:9412145c5fb4cf76c08abbc9005ae83d,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1694433331039158194,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernete
s.io/kube-apiserver.advertise-address.endpoint: 192.168.72.254:8443,kubernetes.io/config.hash: 9412145c5fb4cf76c08abbc9005ae83d,kubernetes.io/config.seen: 2023-09-11T11:54:09.411322227Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-pause-474712,Uid:ea624c985eb19dbf55c562692246ceca,Namespace:kube-system,Attempt:1,},State:SANDBOX_NOTREADY,CreatedAt:1694433331022756151,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ea624c985eb19dbf55c562692246ceca,kubernetes.io/config.seen: 2023-09-11T11:54:09.411323409Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/cha
in.go:25" id=19723902-49e5-457e-b061-4b5e05ba30be name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 11:56:20 pause-474712 crio[2505]: time="2023-09-11 11:56:20.706936271Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=77d213bf-7051-48d7-a180-17fada2042b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:56:20 pause-474712 crio[2505]: time="2023-09-11 11:56:20.707018398Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=77d213bf-7051-48d7-a180-17fada2042b8 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 11:56:20 pause-474712 crio[2505]: time="2023-09-11 11:56:20.707477010Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=77d213bf-7051-48d7-a180-17fada2042b8 name=/runtime.v1.RuntimeService/ListContainers
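
The /runtime.v1.RuntimeService/ListContainers requests captured in this journal arrive over CRI-O's gRPC socket. A minimal CRI client sketch that issues the same RPC, assuming CRI-O's default socket path /var/run/crio/crio.sock; this is an illustrative stand-in, not the tooling (kubelet/crictl/minikube) that actually produced these entries:

    // Hypothetical CRI client sketch: list all containers via the
    // runtime.v1 ListContainers RPC, as seen in the debug responses above.
    package main

    import (
    	"context"
    	"fmt"
    	"time"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Default CRI-O socket; adjust if the runtime listens elsewhere.
    	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()

    	client := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    	defer cancel()

    	// An empty filter mirrors the "No filters were applied" responses above.
    	resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
    	if err != nil {
    		panic(err)
    	}
    	for _, c := range resp.Containers {
    		fmt.Printf("%s\t%s\t%s\n", c.Id[:12], c.Metadata.Name, c.State)
    	}
    }

On the node itself, crictl ps -a drives the same ListContainers RPC and is the usual way to inspect this state interactively.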
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.238024830Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=dbb4c979-6828-40ed-a9b7-582a41e7a472 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.238130054Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=dbb4c979-6828-40ed-a9b7-582a41e7a472 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.238482385Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=dbb4c979-6828-40ed-a9b7-582a41e7a472 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.278829648Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a66c67e0-d0c5-4ba9-8c81-f31e97030846 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.278898644Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a66c67e0-d0c5-4ba9-8c81-f31e97030846 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.279809967Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a66c67e0-d0c5-4ba9-8c81-f31e97030846 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.324142283Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7f4bd65b-2699-4f27-81d4-262ddd711cbc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.324221203Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7f4bd65b-2699-4f27-81d4-262ddd711cbc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.324490099Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7f4bd65b-2699-4f27-81d4-262ddd711cbc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.382107274Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2b616d6d-99f3-4fd4-9a46-a8f5cd2639c3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.382176646Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2b616d6d-99f3-4fd4-9a46-a8f5cd2639c3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.382545854Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2b616d6d-99f3-4fd4-9a46-a8f5cd2639c3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.431521248Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=796929e1-eaaf-4958-9da9-a5a8a5839713 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.431651686Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=796929e1-eaaf-4958-9da9-a5a8a5839713 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.431884350Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=796929e1-eaaf-4958-9da9-a5a8a5839713 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.477354538Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=76b9206b-4b2e-466a-9b77-071f18f00ff7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.477421735Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=76b9206b-4b2e-466a-9b77-071f18f00ff7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.477788316Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=76b9206b-4b2e-466a-9b77-071f18f00ff7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.534980806Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c70d1bef-ef61-4878-8568-dadb5a27fd0e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.535062241Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c70d1bef-ef61-4878-8568-dadb5a27fd0e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 11:56:21 pause-474712 crio[2505]: time="2023-09-11 11:56:21.535309694Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694433360773688391,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694433360751396181,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.
uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694433355101101883,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:
map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694433355156221117,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]strin
g{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f,PodSandboxId:c2df140325c6433e7085b1564884ffb2613450c5b700013e51f3fbc26a3c1ff4,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694433355125300412,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ea624c985eb19dbf55c562692246ceca,},Annotati
ons:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b,PodSandboxId:09719c868f4cd8eadde99ee0dc85703eb9d48334a6d92dfd410268331fa81f1a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694433352515626887,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[string]s
tring{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585,PodSandboxId:b6a4cbf99e959f37b1bfc4c0ce73b0a5673c6ec621719da59a258f3fef3e3818,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_EXITED,CreatedAt:1694433345364968945,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bc089628cfa85bdc4268697f305d8ec4,},Annotations:map[string]string{io.kubernetes.container.hash: b1c79dfc,io.kubernete
s.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a,PodSandboxId:1c6296f89032c45204b79b985fb5e81306909bcbbbf50356597d82967c26e3d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_EXITED,CreatedAt:1694433345278257484,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cbc1b6c173dfcbc815a42ad9f85335c,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.resta
rtCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d,PodSandboxId:81876dab9e684df6b917bc5e0bfe52404d9759184c143fa5c5e7ac25b15b7da1,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_EXITED,CreatedAt:1694433338061494936,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cdrcf,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 97d977cc-d0ee-4026-bfa0-61a585f56fd0,},Annotations:map[string]string{io.kubernetes.container.hash: 603195ec,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580,PodSandboxId:9c581fdfd8c2975a8e3fe16e4c1ae266a11c89e4992766c780434fe469071187,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_EXITED,CreatedAt:1694433337900168027,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-9krg2,io.kubernetes.pod.namespace: kube-system
,io.kubernetes.pod.uid: c812f7b6-5c49-4df6-9d4e-499bf70284b0,},Annotations:map[string]string{io.kubernetes.container.hash: 6c714c86,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9,PodSandboxId:97653e353284042716d109f5c06a15b981d953809ae39830dab6db55e52f1748,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,State:CONTAINER_EXITED,CreatedAt:1694433332344156256,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
ea624c985eb19dbf55c562692246ceca,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0,PodSandboxId:c77341ad163c568ee88b0e81dda16c24733a7e7ecc05ecb6fa43d1985d60335f,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,State:CONTAINER_EXITED,CreatedAt:1694433332092742703,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-pause-474712,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9412145c5fb4cf76c08abbc9005ae83d,},Annotations:map[stri
ng]string{io.kubernetes.container.hash: 3016c676,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c70d1bef-ef61-4878-8568-dadb5a27fd0e name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	641bc5c9392e3       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   20 seconds ago      Running             coredns                   2                   81876dab9e684
	67fefc7cb4b4f       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   20 seconds ago      Running             kube-proxy                2                   9c581fdfd8c29
	9ddb3c53ef7e1       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   26 seconds ago      Running             kube-scheduler            3                   1c6296f89032c
	db980e15337c5       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   26 seconds ago      Running             kube-controller-manager   2                   c2df140325c64
	b912da38594e6       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   26 seconds ago      Running             etcd                      3                   b6a4cbf99e959
	2bf1d48aef9bb       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   29 seconds ago      Running             kube-apiserver            2                   09719c868f4cd
	52f9b25d22d4f       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   36 seconds ago      Exited              etcd                      2                   b6a4cbf99e959
	7ce23adc307e2       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   36 seconds ago      Exited              kube-scheduler            2                   1c6296f89032c
	e92dae80461ab       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   43 seconds ago      Exited              coredns                   1                   81876dab9e684
	fa5684fdafe5b       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   43 seconds ago      Exited              kube-proxy                1                   9c581fdfd8c29
	5442ef36f27e0       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   49 seconds ago      Exited              kube-controller-manager   1                   97653e3532840
	e53a2209537c2       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   49 seconds ago      Exited              kube-apiserver            1                   c77341ad163c5
	
	* 
	* ==> coredns [641bc5c9392e3169c05c2e0952398cef6c4d18d3297bdfa1587fdfcb33429283] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:54710 - 8579 "HINFO IN 4109688945192574580.8872295817894910676. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.013798248s
	
	* 
	* ==> coredns [e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d] <==
	* [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:45185 - 12936 "HINFO IN 1189010001419627576.687696111965202414. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010661557s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	
	* 
	* ==> describe nodes <==
	* Name:               pause-474712
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-474712
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=pause-474712
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_54_09_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:54:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-474712
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 11:56:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 11:56:00 +0000   Mon, 11 Sep 2023 11:54:10 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.254
	  Hostname:    pause-474712
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2017420Ki
	  pods:               110
	System Info:
	  Machine ID:                 2833621091844586ab2599bce82e25b6
	  System UUID:                28336210-9184-4586-ab25-99bce82e25b6
	  Boot ID:                    9689fff7-4a54-45ab-b958-1d59fbb48245
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-cdrcf                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     2m
	  kube-system                 etcd-pause-474712                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         2m12s
	  kube-system                 kube-apiserver-pause-474712             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-controller-manager-pause-474712    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m12s
	  kube-system                 kube-proxy-9krg2                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m
	  kube-system                 kube-scheduler-pause-474712             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 115s                   kube-proxy       
	  Normal  Starting                 20s                    kube-proxy       
	  Normal  Starting                 2m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m23s (x8 over 2m23s)  kubelet          Node pause-474712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m23s (x8 over 2m23s)  kubelet          Node pause-474712 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m23s (x7 over 2m23s)  kubelet          Node pause-474712 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     2m12s                  kubelet          Node pause-474712 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  2m12s                  kubelet          Node pause-474712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m12s                  kubelet          Node pause-474712 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 2m12s                  kubelet          Starting kubelet.
	  Normal  NodeReady                2m11s                  kubelet          Node pause-474712 status is now: NodeReady
	  Normal  RegisteredNode           2m1s                   node-controller  Node pause-474712 event: Registered Node pause-474712 in Controller
	  Normal  Starting                 27s                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27s (x8 over 27s)      kubelet          Node pause-474712 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27s (x8 over 27s)      kubelet          Node pause-474712 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27s (x7 over 27s)      kubelet          Node pause-474712 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27s                    kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           8s                     node-controller  Node pause-474712 event: Registered Node pause-474712 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep11 11:53] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.117508] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.619535] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +4.901393] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.215184] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000004] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +5.386116] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.583996] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.127989] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.181200] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.121012] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.230400] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +10.526899] systemd-fstab-generator[927]: Ignoring "noauto" for root device
	[Sep11 11:54] systemd-fstab-generator[1265]: Ignoring "noauto" for root device
	[Sep11 11:55] kauditd_printk_skb: 19 callbacks suppressed
	[ +28.997668] systemd-fstab-generator[2230]: Ignoring "noauto" for root device
	[  +0.355637] systemd-fstab-generator[2279]: Ignoring "noauto" for root device
	[  +0.347828] systemd-fstab-generator[2298]: Ignoring "noauto" for root device
	[  +0.319018] systemd-fstab-generator[2320]: Ignoring "noauto" for root device
	[  +0.568968] systemd-fstab-generator[2395]: Ignoring "noauto" for root device
	[ +20.450961] systemd-fstab-generator[3386]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585] <==
	* {"level":"info","ts":"2023-09-11T11:55:46.097086Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"7.236203ms"}
	{"level":"info","ts":"2023-09-11T11:55:46.106487Z","caller":"etcdserver/server.go:530","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2023-09-11T11:55:46.118693Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","commit-index":454}
	{"level":"info","ts":"2023-09-11T11:55:46.118883Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd switched to configuration voters=()"}
	{"level":"info","ts":"2023-09-11T11:55:46.118954Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became follower at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:46.118994Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 37479b9ddf6d22fd [peers: [], term: 2, commit: 454, applied: 0, lastindex: 454, lastterm: 2]"}
	{"level":"warn","ts":"2023-09-11T11:55:46.124818Z","caller":"auth/store.go:1238","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2023-09-11T11:55:46.145813Z","caller":"mvcc/kvstore.go:393","msg":"kvstore restored","current-rev":428}
	{"level":"info","ts":"2023-09-11T11:55:46.148644Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2023-09-11T11:55:46.151922Z","caller":"etcdserver/corrupt.go:95","msg":"starting initial corruption check","local-member-id":"37479b9ddf6d22fd","timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:55:46.152337Z","caller":"etcdserver/corrupt.go:165","msg":"initial corruption checking passed; no corruption","local-member-id":"37479b9ddf6d22fd"}
	{"level":"info","ts":"2023-09-11T11:55:46.152422Z","caller":"etcdserver/server.go:854","msg":"starting etcd server","local-member-id":"37479b9ddf6d22fd","local-server-version":"3.5.9","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2023-09-11T11:55:46.152779Z","caller":"etcdserver/server.go:754","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2023-09-11T11:55:46.15295Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:46.153005Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:46.153032Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:46.153456Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd switched to configuration voters=(3983323497793135357)"}
	{"level":"info","ts":"2023-09-11T11:55:46.153641Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","added-peer-id":"37479b9ddf6d22fd","added-peer-peer-urls":["https://192.168.72.254:2380"]}
	{"level":"info","ts":"2023-09-11T11:55:46.153789Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:46.153835Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:46.160346Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:55:46.160921Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"37479b9ddf6d22fd","initial-advertise-peer-urls":["https://192.168.72.254:2380"],"listen-peer-urls":["https://192.168.72.254:2380"],"advertise-client-urls":["https://192.168.72.254:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.254:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:55:46.16075Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:46.161381Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:46.161325Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	
	* 
	* ==> etcd [b912da38594e6eb60f803da82e904c92fa37c5205fcaaef00e026149c1966163] <==
	* {"level":"info","ts":"2023-09-11T11:55:57.131532Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:57.131542Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T11:55:57.134638Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T11:55:57.13491Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"37479b9ddf6d22fd","initial-advertise-peer-urls":["https://192.168.72.254:2380"],"listen-peer-urls":["https://192.168.72.254:2380"],"advertise-client-urls":["https://192.168.72.254:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.254:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T11:55:57.135023Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T11:55:57.135269Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd switched to configuration voters=(3983323497793135357)"}
	{"level":"info","ts":"2023-09-11T11:55:57.135344Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","added-peer-id":"37479b9ddf6d22fd","added-peer-peer-urls":["https://192.168.72.254:2380"]}
	{"level":"info","ts":"2023-09-11T11:55:57.135454Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cd374d1b1758c885","local-member-id":"37479b9ddf6d22fd","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:57.135498Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T11:55:57.137806Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:57.137907Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.254:2380"}
	{"level":"info","ts":"2023-09-11T11:55:58.376803Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd is starting a new election at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:58.376928Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:58.376976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd received MsgPreVoteResp from 37479b9ddf6d22fd at term 2"}
	{"level":"info","ts":"2023-09-11T11:55:58.377021Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.37705Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd received MsgVoteResp from 37479b9ddf6d22fd at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.377076Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"37479b9ddf6d22fd became leader at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.377102Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 37479b9ddf6d22fd elected leader 37479b9ddf6d22fd at term 3"}
	{"level":"info","ts":"2023-09-11T11:55:58.382647Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"37479b9ddf6d22fd","local-member-attributes":"{Name:pause-474712 ClientURLs:[https://192.168.72.254:2379]}","request-path":"/0/members/37479b9ddf6d22fd/attributes","cluster-id":"cd374d1b1758c885","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T11:55:58.382661Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:55:58.382886Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T11:55:58.382935Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T11:55:58.382726Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T11:55:58.384158Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.254:2379"}
	{"level":"info","ts":"2023-09-11T11:55:58.384188Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	* 
	* ==> kernel <==
	*  11:56:21 up 2 min,  0 users,  load average: 1.04, 0.51, 0.19
	Linux pause-474712 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2bf1d48aef9bb510a387bd7792bf8d0e8a2301ce16f36f0948a0f97a6fc1171b] <==
	* I0911 11:55:59.902337       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0911 11:55:59.902374       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0911 11:55:59.902408       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0911 11:55:59.947874       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0911 11:55:59.973179       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0911 11:55:59.973243       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0911 11:55:59.976750       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0911 11:55:59.977490       1 shared_informer.go:318] Caches are synced for configmaps
	I0911 11:55:59.977644       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0911 11:56:00.003995       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0911 11:56:00.004037       1 aggregator.go:166] initial CRD sync complete...
	I0911 11:56:00.004068       1 autoregister_controller.go:141] Starting autoregister controller
	I0911 11:56:00.004077       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0911 11:56:00.004087       1 cache.go:39] Caches are synced for autoregister controller
	I0911 11:56:00.011909       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0911 11:56:00.026293       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	E0911 11:56:00.052175       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0911 11:56:00.851671       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0911 11:56:01.588023       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0911 11:56:01.599701       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0911 11:56:01.684763       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0911 11:56:01.746021       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0911 11:56:01.754713       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0911 11:56:13.055080       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0911 11:56:13.252717       1 controller.go:624] quota admission added evaluator for: endpoints
	
	* 
	* ==> kube-apiserver [e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0] <==
	* 
	* 
	* ==> kube-controller-manager [5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9] <==
	* 
	* 
	* ==> kube-controller-manager [db980e15337c51b336ce1c9efe62bbdd5223ea8d6a84c43329b5b49fc48ce16f] <==
	* I0911 11:56:13.053639       1 shared_informer.go:318] Caches are synced for certificate-csrsigning-legacy-unknown
	I0911 11:56:13.059289       1 shared_informer.go:318] Caches are synced for taint
	I0911 11:56:13.059524       1 node_lifecycle_controller.go:1225] "Initializing eviction metric for zone" zone=""
	I0911 11:56:13.059955       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="pause-474712"
	I0911 11:56:13.060194       1 node_lifecycle_controller.go:1071] "Controller detected that zone is now in new state" zone="" newState="Normal"
	I0911 11:56:13.060366       1 taint_manager.go:206] "Starting NoExecuteTaintManager"
	I0911 11:56:13.060937       1 event.go:307] "Event occurred" object="pause-474712" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node pause-474712 event: Registered Node pause-474712 in Controller"
	I0911 11:56:13.061288       1 taint_manager.go:211] "Sending events to api server"
	I0911 11:56:13.061411       1 shared_informer.go:318] Caches are synced for persistent volume
	I0911 11:56:13.064255       1 shared_informer.go:318] Caches are synced for node
	I0911 11:56:13.064514       1 range_allocator.go:174] "Sending events to api server"
	I0911 11:56:13.064775       1 range_allocator.go:178] "Starting range CIDR allocator"
	I0911 11:56:13.064803       1 shared_informer.go:311] Waiting for caches to sync for cidrallocator
	I0911 11:56:13.064913       1 shared_informer.go:318] Caches are synced for cidrallocator
	I0911 11:56:13.066491       1 shared_informer.go:318] Caches are synced for endpoint
	I0911 11:56:13.071297       1 shared_informer.go:318] Caches are synced for endpoint_slice_mirroring
	I0911 11:56:13.072663       1 shared_informer.go:318] Caches are synced for disruption
	I0911 11:56:13.077990       1 shared_informer.go:318] Caches are synced for attach detach
	I0911 11:56:13.144893       1 shared_informer.go:318] Caches are synced for namespace
	I0911 11:56:13.164993       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:56:13.181786       1 shared_informer.go:318] Caches are synced for service account
	I0911 11:56:13.194330       1 shared_informer.go:318] Caches are synced for resource quota
	I0911 11:56:13.612976       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:56:13.648404       1 shared_informer.go:318] Caches are synced for garbage collector
	I0911 11:56:13.648458       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	
	* 
	* ==> kube-proxy [67fefc7cb4b4f93801f1fd166a195ac5bc077f6b7eb8f440ffee93d8b63cc7ea] <==
	* I0911 11:56:01.136716       1 server_others.go:69] "Using iptables proxy"
	I0911 11:56:01.161445       1 node.go:141] Successfully retrieved node IP: 192.168.72.254
	I0911 11:56:01.245861       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 11:56:01.245951       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 11:56:01.250844       1 server_others.go:152] "Using iptables Proxier"
	I0911 11:56:01.250948       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 11:56:01.251126       1 server.go:846] "Version info" version="v1.28.1"
	I0911 11:56:01.251134       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:56:01.254927       1 config.go:188] "Starting service config controller"
	I0911 11:56:01.254970       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 11:56:01.254999       1 config.go:97] "Starting endpoint slice config controller"
	I0911 11:56:01.255003       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 11:56:01.255439       1 config.go:315] "Starting node config controller"
	I0911 11:56:01.255445       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 11:56:01.355350       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 11:56:01.355616       1 shared_informer.go:318] Caches are synced for node config
	I0911 11:56:01.355371       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-proxy [fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580] <==
	* I0911 11:55:38.176426       1 server_others.go:69] "Using iptables proxy"
	E0911 11:55:38.181528       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:39.361785       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:41.486981       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:46.150895       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/pause-474712": dial tcp 192.168.72.254:8443: connect: connection refused
	
	* 
	* ==> kube-scheduler [7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a] <==
	* E0911 11:55:47.246468       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.246676       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://192.168.72.254:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.246726       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.72.254:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.246861       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: Get "https://192.168.72.254:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.246906       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.72.254:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.247066       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.247115       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.254:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248161       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.72.254:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248128       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: Get "https://192.168.72.254:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248286       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.72.254:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%2Cstatus.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248285       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: Get "https://192.168.72.254:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248353       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.72.254:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248358       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248411       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.72.254:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248416       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: Get "https://192.168.72.254:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248464       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.72.254:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	W0911 11:55:47.248476       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.254:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.248529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.72.254:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.72.254:8443: connect: connection refused
	E0911 11:55:47.494920       1 server.go:214] "waiting for handlers to sync" err="context canceled"
	I0911 11:55:47.495642       1 secure_serving.go:255] Stopped listening on 127.0.0.1:10259
	I0911 11:55:47.495740       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0911 11:55:47.495932       1 shared_informer.go:314] unable to sync caches for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:55:47.495968       1 configmap_cafile_content.go:210] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0911 11:55:47.496061       1 run.go:74] "command failed" err="finished without leader elect"
	
	* 
	* ==> kube-scheduler [9ddb3c53ef7e1826611f2ff1b2198c8921f9d42556ce9a264ff5d4e11edeae24] <==
	* I0911 11:55:57.197218       1 serving.go:348] Generated self-signed cert in-memory
	W0911 11:55:59.920893       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 11:55:59.921021       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 11:55:59.921033       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 11:55:59.921040       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 11:55:59.953495       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 11:55:59.953657       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 11:55:59.955878       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 11:55:59.959126       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 11:55:59.959192       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 11:55:59.959214       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 11:56:00.060484       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 11:53:35 UTC, ends at Mon 2023-09-11 11:56:22 UTC. --
	Sep 11 11:55:54 pause-474712 kubelet[3392]: I0911 11:55:54.849161    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9412145c5fb4cf76c08abbc9005ae83d-usr-share-ca-certificates\") pod \"kube-apiserver-pause-474712\" (UID: \"9412145c5fb4cf76c08abbc9005ae83d\") " pod="kube-system/kube-apiserver-pause-474712"
	Sep 11 11:55:54 pause-474712 kubelet[3392]: I0911 11:55:54.849183    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ea624c985eb19dbf55c562692246ceca-ca-certs\") pod \"kube-controller-manager-pause-474712\" (UID: \"ea624c985eb19dbf55c562692246ceca\") " pod="kube-system/kube-controller-manager-pause-474712"
	Sep 11 11:55:54 pause-474712 kubelet[3392]: I0911 11:55:54.849200    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ea624c985eb19dbf55c562692246ceca-flexvolume-dir\") pod \"kube-controller-manager-pause-474712\" (UID: \"ea624c985eb19dbf55c562692246ceca\") " pod="kube-system/kube-controller-manager-pause-474712"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: E0911 11:55:55.041767    3392 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/pause-474712?timeout=10s\": dial tcp 192.168.72.254:8443: connect: connection refused" interval="800ms"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.063189    3392 scope.go:117] "RemoveContainer" containerID="52f9b25d22d4ffc0254c2a0878bf77d7fd213206221fbda288a34141f855f585"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.064701    3392 scope.go:117] "RemoveContainer" containerID="e53a2209537c24d41ab39434a0a69fb4092ffe20f5ad339e6965424141b36fc0"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.067905    3392 scope.go:117] "RemoveContainer" containerID="7ce23adc307e28e6b83b91b438756f10932bee574d26c0f5d40b29f2bf82123a"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.068796    3392 scope.go:117] "RemoveContainer" containerID="5442ef36f27e0cc4bacbdbaa91c9320aea9788fc5aaa94f3ee8cbada248b45e9"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.149288    3392 kubelet_node_status.go:70] "Attempting to register node" node="pause-474712"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: E0911 11:55:55.149746    3392 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://control-plane.minikube.internal:8443/api/v1/nodes\": dial tcp 192.168.72.254:8443: connect: connection refused" node="pause-474712"
	Sep 11 11:55:55 pause-474712 kubelet[3392]: I0911 11:55:55.951009    3392 kubelet_node_status.go:70] "Attempting to register node" node="pause-474712"
	Sep 11 11:55:59 pause-474712 kubelet[3392]: E0911 11:55:59.956352    3392 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-pause-474712\" already exists" pod="kube-system/kube-apiserver-pause-474712"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.010363    3392 kubelet_node_status.go:108] "Node was previously registered" node="pause-474712"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.010489    3392 kubelet_node_status.go:73] "Successfully registered node" node="pause-474712"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.012325    3392 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.014493    3392 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.415257    3392 apiserver.go:52] "Watching apiserver"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.419422    3392 topology_manager.go:215] "Topology Admit Handler" podUID="c812f7b6-5c49-4df6-9d4e-499bf70284b0" podNamespace="kube-system" podName="kube-proxy-9krg2"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.419552    3392 topology_manager.go:215] "Topology Admit Handler" podUID="97d977cc-d0ee-4026-bfa0-61a585f56fd0" podNamespace="kube-system" podName="coredns-5dd5756b68-cdrcf"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.434940    3392 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.496272    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c812f7b6-5c49-4df6-9d4e-499bf70284b0-xtables-lock\") pod \"kube-proxy-9krg2\" (UID: \"c812f7b6-5c49-4df6-9d4e-499bf70284b0\") " pod="kube-system/kube-proxy-9krg2"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.496402    3392 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c812f7b6-5c49-4df6-9d4e-499bf70284b0-lib-modules\") pod \"kube-proxy-9krg2\" (UID: \"c812f7b6-5c49-4df6-9d4e-499bf70284b0\") " pod="kube-system/kube-proxy-9krg2"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.720211    3392 scope.go:117] "RemoveContainer" containerID="e92dae80461ab242b808d67d87a7bc49447ebdc7673aba18437f60ae9268905d"
	Sep 11 11:56:00 pause-474712 kubelet[3392]: I0911 11:56:00.721115    3392 scope.go:117] "RemoveContainer" containerID="fa5684fdafe5b6671c79bc36dfc77b5705985e6b3a1ed40f25d4e16754041580"
	Sep 11 11:56:04 pause-474712 kubelet[3392]: I0911 11:56:04.255889    3392 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p pause-474712 -n pause-474712
helpers_test.go:261: (dbg) Run:  kubectl --context pause-474712 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestPause/serial/SecondStartNoReconfiguration FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestPause/serial/SecondStartNoReconfiguration (78.09s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (140.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-352076 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-352076 --alsologtostderr -v=3: exit status 82 (2m1.715258032s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-352076"  ...
	* Stopping node "no-preload-352076"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 12:00:00.801488 2253715 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:00:00.801674 2253715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:00:00.801688 2253715 out.go:309] Setting ErrFile to fd 2...
	I0911 12:00:00.801695 2253715 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:00:00.802055 2253715 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:00:00.802414 2253715 out.go:303] Setting JSON to false
	I0911 12:00:00.802551 2253715 mustload.go:65] Loading cluster: no-preload-352076
	I0911 12:00:00.802960 2253715 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:00:00.803046 2253715 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/config.json ...
	I0911 12:00:00.803242 2253715 mustload.go:65] Loading cluster: no-preload-352076
	I0911 12:00:00.803379 2253715 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:00:00.803435 2253715 stop.go:39] StopHost: no-preload-352076
	I0911 12:00:00.803793 2253715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:00:00.803868 2253715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:00:00.821763 2253715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42235
	I0911 12:00:00.822450 2253715 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:00:00.823170 2253715 main.go:141] libmachine: Using API Version  1
	I0911 12:00:00.823194 2253715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:00:00.823665 2253715 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:00:00.826019 2253715 out.go:177] * Stopping node "no-preload-352076"  ...
	I0911 12:00:00.828667 2253715 main.go:141] libmachine: Stopping "no-preload-352076"...
	I0911 12:00:00.828723 2253715 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:00:00.831306 2253715 main.go:141] libmachine: (no-preload-352076) Calling .Stop
	I0911 12:00:00.836385 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 0/60
	I0911 12:00:01.837811 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 1/60
	I0911 12:00:02.840316 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 2/60
	I0911 12:00:03.841911 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 3/60
	I0911 12:00:04.843413 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 4/60
	I0911 12:00:05.846005 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 5/60
	I0911 12:00:06.847867 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 6/60
	I0911 12:00:07.849372 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 7/60
	I0911 12:00:08.851638 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 8/60
	I0911 12:00:09.853406 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 9/60
	I0911 12:00:10.856016 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 10/60
	I0911 12:00:11.857671 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 11/60
	I0911 12:00:12.859317 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 12/60
	I0911 12:00:13.860929 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 13/60
	I0911 12:00:14.862523 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 14/60
	I0911 12:00:15.864741 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 15/60
	I0911 12:00:16.866407 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 16/60
	I0911 12:00:17.868057 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 17/60
	I0911 12:00:18.869566 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 18/60
	I0911 12:00:19.870921 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 19/60
	I0911 12:00:20.873012 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 20/60
	I0911 12:00:21.874762 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 21/60
	I0911 12:00:22.876937 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 22/60
	I0911 12:00:23.878486 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 23/60
	I0911 12:00:24.880007 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 24/60
	I0911 12:00:25.882469 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 25/60
	I0911 12:00:26.884111 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 26/60
	I0911 12:00:27.885714 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 27/60
	I0911 12:00:28.887275 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 28/60
	I0911 12:00:29.889535 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 29/60
	I0911 12:00:30.891541 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 30/60
	I0911 12:00:31.893044 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 31/60
	I0911 12:00:32.894711 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 32/60
	I0911 12:00:33.896440 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 33/60
	I0911 12:00:34.898014 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 34/60
	I0911 12:00:35.900096 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 35/60
	I0911 12:00:36.901495 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 36/60
	I0911 12:00:37.903181 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 37/60
	I0911 12:00:38.904791 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 38/60
	I0911 12:00:39.906348 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 39/60
	I0911 12:00:40.908037 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 40/60
	I0911 12:00:41.909600 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 41/60
	I0911 12:00:42.911976 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 42/60
	I0911 12:00:43.913554 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 43/60
	I0911 12:00:44.915256 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 44/60
	I0911 12:00:45.918084 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 45/60
	I0911 12:00:46.919820 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 46/60
	I0911 12:00:47.921302 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 47/60
	I0911 12:00:48.923582 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 48/60
	I0911 12:00:49.925069 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 49/60
	I0911 12:00:50.927659 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 50/60
	I0911 12:00:51.930023 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 51/60
	I0911 12:00:52.932179 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 52/60
	I0911 12:00:53.933930 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 53/60
	I0911 12:00:54.935669 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 54/60
	I0911 12:00:55.938208 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 55/60
	I0911 12:00:56.939694 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 56/60
	I0911 12:00:57.941359 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 57/60
	I0911 12:00:58.943230 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 58/60
	I0911 12:00:59.944776 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 59/60
	I0911 12:01:00.946262 2253715 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:01:00.946336 2253715 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:01:00.946359 2253715 retry.go:31] will retry after 1.358093922s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:01:02.305912 2253715 stop.go:39] StopHost: no-preload-352076
	I0911 12:01:02.306402 2253715 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:01:02.306467 2253715 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:01:02.322270 2253715 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41321
	I0911 12:01:02.322781 2253715 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:01:02.323429 2253715 main.go:141] libmachine: Using API Version  1
	I0911 12:01:02.323473 2253715 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:01:02.323878 2253715 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:01:02.326275 2253715 out.go:177] * Stopping node "no-preload-352076"  ...
	I0911 12:01:02.328046 2253715 main.go:141] libmachine: Stopping "no-preload-352076"...
	I0911 12:01:02.328076 2253715 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:01:02.330061 2253715 main.go:141] libmachine: (no-preload-352076) Calling .Stop
	I0911 12:01:02.334443 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 0/60
	I0911 12:01:03.336048 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 1/60
	I0911 12:01:04.337629 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 2/60
	I0911 12:01:05.339228 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 3/60
	I0911 12:01:06.341227 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 4/60
	I0911 12:01:07.343285 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 5/60
	I0911 12:01:08.344941 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 6/60
	I0911 12:01:09.346442 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 7/60
	I0911 12:01:10.348100 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 8/60
	I0911 12:01:11.349732 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 9/60
	I0911 12:01:12.351563 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 10/60
	I0911 12:01:13.353187 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 11/60
	I0911 12:01:14.355807 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 12/60
	I0911 12:01:15.358065 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 13/60
	I0911 12:01:16.359956 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 14/60
	I0911 12:01:17.362321 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 15/60
	I0911 12:01:18.364189 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 16/60
	I0911 12:01:19.366060 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 17/60
	I0911 12:01:20.367574 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 18/60
	I0911 12:01:21.369226 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 19/60
	I0911 12:01:22.371163 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 20/60
	I0911 12:01:23.373274 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 21/60
	I0911 12:01:24.375441 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 22/60
	I0911 12:01:25.377338 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 23/60
	I0911 12:01:26.379766 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 24/60
	I0911 12:01:27.382106 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 25/60
	I0911 12:01:28.383722 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 26/60
	I0911 12:01:29.385109 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 27/60
	I0911 12:01:30.386819 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 28/60
	I0911 12:01:31.388443 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 29/60
	I0911 12:01:32.390425 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 30/60
	I0911 12:01:33.391843 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 31/60
	I0911 12:01:34.393633 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 32/60
	I0911 12:01:35.395636 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 33/60
	I0911 12:01:36.397310 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 34/60
	I0911 12:01:37.399116 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 35/60
	I0911 12:01:38.400809 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 36/60
	I0911 12:01:39.402313 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 37/60
	I0911 12:01:40.404007 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 38/60
	I0911 12:01:41.405500 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 39/60
	I0911 12:01:42.407150 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 40/60
	I0911 12:01:43.408746 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 41/60
	I0911 12:01:44.410456 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 42/60
	I0911 12:01:45.412290 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 43/60
	I0911 12:01:46.413927 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 44/60
	I0911 12:01:47.416096 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 45/60
	I0911 12:01:48.417719 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 46/60
	I0911 12:01:49.419167 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 47/60
	I0911 12:01:50.420675 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 48/60
	I0911 12:01:51.422730 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 49/60
	I0911 12:01:52.424345 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 50/60
	I0911 12:01:53.426140 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 51/60
	I0911 12:01:54.427957 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 52/60
	I0911 12:01:55.429780 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 53/60
	I0911 12:01:56.431434 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 54/60
	I0911 12:01:57.433976 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 55/60
	I0911 12:01:58.436068 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 56/60
	I0911 12:01:59.437781 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 57/60
	I0911 12:02:00.439235 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 58/60
	I0911 12:02:01.440882 2253715 main.go:141] libmachine: (no-preload-352076) Waiting for machine to stop 59/60
	I0911 12:02:02.442116 2253715 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:02:02.442187 2253715 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:02:02.444592 2253715 out.go:177] 
	W0911 12:02:02.446432 2253715 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0911 12:02:02.446455 2253715 out.go:239] * 
	* 
	W0911 12:02:02.464553 2253715 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 12:02:02.466204 2253715 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-352076 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076: exit status 3 (18.497124211s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:20.965239 2254648 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.157:22: connect: no route to host
	E0911 12:02:20.965269 2254648 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.157:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-352076" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (140.21s)
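The Stop failures in this section (this one, and the embed-certs and old-k8s-version runs below) all follow the same shape visible in the log above: the stop command polls the VM state once per second for up to 60 attempts, retries the whole sequence once after a ~1.4s back-off, then gives up with GUEST_STOP_TIMEOUT (exit status 82); the two 60-poll rounds plus the ~18.5s post-mortem status check roughly account for the reported 140.21s. The program below is a minimal, self-contained sketch of that control flow only, not minikube's actual stop.go/retry.go implementation; fakeVM and waitForStop are hypothetical names, and the fake VM never leaves "Running" so the timeout path is always taken.

// Hedged sketch of the stop-and-poll pattern seen in the log above; not
// minikube's real code. The fake VM never reports "Stopped", so both
// 60-poll attempts run to completion (~2 minutes) before the program
// gives up, mirroring the GUEST_STOP_TIMEOUT exit in the report.
package main

import (
	"fmt"
	"time"
)

// fakeVM stands in for a libmachine driver: Stop only requests shutdown,
// and the caller must poll State until it reports "Stopped".
type fakeVM struct{ state string }

func (v *fakeVM) Stop()         {} // request shutdown; in this sketch it never takes effect
func (v *fakeVM) State() string { return v.state }

// waitForStop mirrors the "Waiting for machine to stop N/60" lines: one
// poll per second, up to `attempts` polls, then an error if still running.
func waitForStop(vm *fakeVM, attempts int) error {
	vm.Stop()
	for i := 0; i < attempts; i++ {
		if vm.State() == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, attempts)
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("unable to stop vm, current state %q", vm.State())
}

func main() {
	vm := &fakeVM{state: "Running"}

	// First attempt: up to 60 one-second polls (~60s).
	// Lower the attempt count to run this sketch quickly.
	err := waitForStop(vm, 60)
	if err != nil {
		// One retry after a short back-off, matching the retry.go line in the log.
		fmt.Printf("will retry after 1.4s: %v\n", err)
		time.Sleep(1400 * time.Millisecond)
		err = waitForStop(vm, 60)
	}
	if err != nil {
		// Two failed attempts later the command gives up; in the report this
		// corresponds to exit status 82 and "Exiting due to GUEST_STOP_TIMEOUT".
		fmt.Println("X Exiting due to GUEST_STOP_TIMEOUT:", err)
		return
	}
	fmt.Println("machine stopped")
}

With a VM stuck in "Running", the worst case of this pattern is bounded at roughly two minutes of polling per stop invocation, which is why each of these Stop tests fails at a little over 140 seconds once the follow-up status probe (which then fails with "no route to host" on port 22) is included.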

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (140.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-235462 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-235462 --alsologtostderr -v=3: exit status 82 (2m1.771142612s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-235462"  ...
	* Stopping node "embed-certs-235462"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 12:00:09.983584 2253804 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:00:09.983964 2253804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:00:09.984039 2253804 out.go:309] Setting ErrFile to fd 2...
	I0911 12:00:09.984058 2253804 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:00:09.984503 2253804 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:00:09.984990 2253804 out.go:303] Setting JSON to false
	I0911 12:00:09.985157 2253804 mustload.go:65] Loading cluster: embed-certs-235462
	I0911 12:00:09.985843 2253804 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:00:09.985950 2253804 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/config.json ...
	I0911 12:00:09.986143 2253804 mustload.go:65] Loading cluster: embed-certs-235462
	I0911 12:00:09.986255 2253804 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:00:09.986295 2253804 stop.go:39] StopHost: embed-certs-235462
	I0911 12:00:09.986661 2253804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:00:09.986714 2253804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:00:10.003337 2253804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46795
	I0911 12:00:10.003866 2253804 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:00:10.004632 2253804 main.go:141] libmachine: Using API Version  1
	I0911 12:00:10.004665 2253804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:00:10.005160 2253804 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:00:10.008114 2253804 out.go:177] * Stopping node "embed-certs-235462"  ...
	I0911 12:00:10.009951 2253804 main.go:141] libmachine: Stopping "embed-certs-235462"...
	I0911 12:00:10.009981 2253804 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:00:10.012121 2253804 main.go:141] libmachine: (embed-certs-235462) Calling .Stop
	I0911 12:00:10.016395 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 0/60
	I0911 12:00:11.017804 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 1/60
	I0911 12:00:12.019744 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 2/60
	I0911 12:00:13.021560 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 3/60
	I0911 12:00:14.023130 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 4/60
	I0911 12:00:15.025455 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 5/60
	I0911 12:00:16.027570 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 6/60
	I0911 12:00:17.029165 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 7/60
	I0911 12:00:18.030909 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 8/60
	I0911 12:00:19.032631 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 9/60
	I0911 12:00:20.034197 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 10/60
	I0911 12:00:21.036605 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 11/60
	I0911 12:00:22.038476 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 12/60
	I0911 12:00:23.040714 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 13/60
	I0911 12:00:24.042713 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 14/60
	I0911 12:00:25.045305 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 15/60
	I0911 12:00:26.047301 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 16/60
	I0911 12:00:27.049537 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 17/60
	I0911 12:00:28.051352 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 18/60
	I0911 12:00:29.052913 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 19/60
	I0911 12:00:30.054652 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 20/60
	I0911 12:00:31.056726 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 21/60
	I0911 12:00:32.058369 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 22/60
	I0911 12:00:33.059955 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 23/60
	I0911 12:00:34.061555 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 24/60
	I0911 12:00:35.063904 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 25/60
	I0911 12:00:36.065695 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 26/60
	I0911 12:00:37.067374 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 27/60
	I0911 12:00:38.068843 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 28/60
	I0911 12:00:39.070536 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 29/60
	I0911 12:00:40.073038 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 30/60
	I0911 12:00:41.074741 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 31/60
	I0911 12:00:42.076448 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 32/60
	I0911 12:00:43.078064 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 33/60
	I0911 12:00:44.080650 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 34/60
	I0911 12:00:45.083008 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 35/60
	I0911 12:00:46.084606 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 36/60
	I0911 12:00:47.086163 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 37/60
	I0911 12:00:48.087878 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 38/60
	I0911 12:00:49.090462 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 39/60
	I0911 12:00:50.092745 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 40/60
	I0911 12:00:51.094202 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 41/60
	I0911 12:00:52.095931 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 42/60
	I0911 12:00:53.097685 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 43/60
	I0911 12:00:54.099442 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 44/60
	I0911 12:00:55.101517 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 45/60
	I0911 12:00:56.103040 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 46/60
	I0911 12:00:57.104621 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 47/60
	I0911 12:00:58.106119 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 48/60
	I0911 12:00:59.107544 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 49/60
	I0911 12:01:00.109971 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 50/60
	I0911 12:01:01.111715 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 51/60
	I0911 12:01:02.113425 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 52/60
	I0911 12:01:03.114934 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 53/60
	I0911 12:01:04.116751 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 54/60
	I0911 12:01:05.118893 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 55/60
	I0911 12:01:06.120636 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 56/60
	I0911 12:01:07.122152 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 57/60
	I0911 12:01:08.123730 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 58/60
	I0911 12:01:09.125307 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 59/60
	I0911 12:01:10.125901 2253804 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:01:10.125975 2253804 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:01:10.126005 2253804 retry.go:31] will retry after 1.414986146s: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:01:11.541993 2253804 stop.go:39] StopHost: embed-certs-235462
	I0911 12:01:11.542489 2253804 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:01:11.542545 2253804 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:01:11.558582 2253804 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35465
	I0911 12:01:11.559127 2253804 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:01:11.559764 2253804 main.go:141] libmachine: Using API Version  1
	I0911 12:01:11.559792 2253804 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:01:11.560230 2253804 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:01:11.562219 2253804 out.go:177] * Stopping node "embed-certs-235462"  ...
	I0911 12:01:11.563801 2253804 main.go:141] libmachine: Stopping "embed-certs-235462"...
	I0911 12:01:11.563825 2253804 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:01:11.565728 2253804 main.go:141] libmachine: (embed-certs-235462) Calling .Stop
	I0911 12:01:11.569954 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 0/60
	I0911 12:01:12.571413 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 1/60
	I0911 12:01:13.573274 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 2/60
	I0911 12:01:14.575899 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 3/60
	I0911 12:01:15.577681 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 4/60
	I0911 12:01:16.579988 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 5/60
	I0911 12:01:17.581485 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 6/60
	I0911 12:01:18.583590 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 7/60
	I0911 12:01:19.585439 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 8/60
	I0911 12:01:20.586790 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 9/60
	I0911 12:01:21.588913 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 10/60
	I0911 12:01:22.590356 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 11/60
	I0911 12:01:23.592044 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 12/60
	I0911 12:01:24.593570 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 13/60
	I0911 12:01:25.595660 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 14/60
	I0911 12:01:26.598435 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 15/60
	I0911 12:01:27.599829 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 16/60
	I0911 12:01:28.601330 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 17/60
	I0911 12:01:29.603610 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 18/60
	I0911 12:01:30.605640 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 19/60
	I0911 12:01:31.607955 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 20/60
	I0911 12:01:32.609661 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 21/60
	I0911 12:01:33.612106 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 22/60
	I0911 12:01:34.613841 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 23/60
	I0911 12:01:35.615776 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 24/60
	I0911 12:01:36.617487 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 25/60
	I0911 12:01:37.618896 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 26/60
	I0911 12:01:38.620527 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 27/60
	I0911 12:01:39.621971 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 28/60
	I0911 12:01:40.623795 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 29/60
	I0911 12:01:41.626432 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 30/60
	I0911 12:01:42.627990 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 31/60
	I0911 12:01:43.629686 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 32/60
	I0911 12:01:44.631374 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 33/60
	I0911 12:01:45.634038 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 34/60
	I0911 12:01:46.636007 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 35/60
	I0911 12:01:47.637638 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 36/60
	I0911 12:01:48.639613 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 37/60
	I0911 12:01:49.641664 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 38/60
	I0911 12:01:50.643596 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 39/60
	I0911 12:01:51.645239 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 40/60
	I0911 12:01:52.646814 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 41/60
	I0911 12:01:53.648427 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 42/60
	I0911 12:01:54.650689 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 43/60
	I0911 12:01:55.652444 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 44/60
	I0911 12:01:56.655025 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 45/60
	I0911 12:01:57.656510 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 46/60
	I0911 12:01:58.658418 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 47/60
	I0911 12:01:59.661056 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 48/60
	I0911 12:02:00.662854 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 49/60
	I0911 12:02:01.665160 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 50/60
	I0911 12:02:02.667652 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 51/60
	I0911 12:02:03.669337 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 52/60
	I0911 12:02:04.670924 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 53/60
	I0911 12:02:05.672474 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 54/60
	I0911 12:02:06.674555 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 55/60
	I0911 12:02:07.676214 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 56/60
	I0911 12:02:08.677852 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 57/60
	I0911 12:02:09.679688 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 58/60
	I0911 12:02:10.681257 2253804 main.go:141] libmachine: (embed-certs-235462) Waiting for machine to stop 59/60
	I0911 12:02:11.682331 2253804 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:02:11.682382 2253804 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:02:11.684622 2253804 out.go:177] 
	W0911 12:02:11.686079 2253804 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0911 12:02:11.686105 2253804 out.go:239] * 
	* 
	W0911 12:02:11.704063 2253804 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 12:02:11.705937 2253804 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-235462 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462: exit status 3 (18.472682183s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:30.181200 2254700 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	E0911 12:02:30.181227 2254700 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-235462" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (140.25s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (140.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-642215 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p old-k8s-version-642215 --alsologtostderr -v=3: exit status 82 (2m1.331949436s)

                                                
                                                
-- stdout --
	* Stopping node "old-k8s-version-642215"  ...
	* Stopping node "old-k8s-version-642215"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 12:00:20.721166 2253919 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:00:20.721324 2253919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:00:20.721337 2253919 out.go:309] Setting ErrFile to fd 2...
	I0911 12:00:20.721345 2253919 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:00:20.721561 2253919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:00:20.721842 2253919 out.go:303] Setting JSON to false
	I0911 12:00:20.721938 2253919 mustload.go:65] Loading cluster: old-k8s-version-642215
	I0911 12:00:20.722282 2253919 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:00:20.722393 2253919 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/config.json ...
	I0911 12:00:20.722578 2253919 mustload.go:65] Loading cluster: old-k8s-version-642215
	I0911 12:00:20.722711 2253919 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:00:20.722756 2253919 stop.go:39] StopHost: old-k8s-version-642215
	I0911 12:00:20.723119 2253919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:00:20.723190 2253919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:00:20.740027 2253919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40153
	I0911 12:00:20.740569 2253919 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:00:20.741275 2253919 main.go:141] libmachine: Using API Version  1
	I0911 12:00:20.741301 2253919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:00:20.741733 2253919 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:00:20.744788 2253919 out.go:177] * Stopping node "old-k8s-version-642215"  ...
	I0911 12:00:20.746357 2253919 main.go:141] libmachine: Stopping "old-k8s-version-642215"...
	I0911 12:00:20.746393 2253919 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:00:20.748585 2253919 main.go:141] libmachine: (old-k8s-version-642215) Calling .Stop
	I0911 12:00:20.752571 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 0/60
	I0911 12:00:21.754391 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 1/60
	I0911 12:00:22.756029 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 2/60
	I0911 12:00:23.757530 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 3/60
	I0911 12:00:24.759229 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 4/60
	I0911 12:00:25.761836 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 5/60
	I0911 12:00:26.763260 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 6/60
	I0911 12:00:27.765271 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 7/60
	I0911 12:00:28.766970 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 8/60
	I0911 12:00:29.768737 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 9/60
	I0911 12:00:30.770580 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 10/60
	I0911 12:00:31.772351 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 11/60
	I0911 12:00:32.773996 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 12/60
	I0911 12:00:33.775961 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 13/60
	I0911 12:00:34.777433 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 14/60
	I0911 12:00:35.779369 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 15/60
	I0911 12:00:36.781662 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 16/60
	I0911 12:00:37.783587 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 17/60
	I0911 12:00:38.785244 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 18/60
	I0911 12:00:39.787766 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 19/60
	I0911 12:00:40.789340 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 20/60
	I0911 12:00:41.791059 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 21/60
	I0911 12:00:42.792807 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 22/60
	I0911 12:00:43.794395 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 23/60
	I0911 12:00:44.796443 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 24/60
	I0911 12:00:45.798755 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 25/60
	I0911 12:00:46.800406 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 26/60
	I0911 12:00:47.801896 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 27/60
	I0911 12:00:48.803281 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 28/60
	I0911 12:00:49.805155 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 29/60
	I0911 12:00:50.807181 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 30/60
	I0911 12:00:51.809066 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 31/60
	I0911 12:00:52.811893 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 32/60
	I0911 12:00:53.813534 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 33/60
	I0911 12:00:54.815291 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 34/60
	I0911 12:00:55.817372 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 35/60
	I0911 12:00:56.819917 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 36/60
	I0911 12:00:57.821948 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 37/60
	I0911 12:00:58.823366 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 38/60
	I0911 12:00:59.825048 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 39/60
	I0911 12:01:00.826749 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 40/60
	I0911 12:01:01.828234 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 41/60
	I0911 12:01:02.829734 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 42/60
	I0911 12:01:03.831141 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 43/60
	I0911 12:01:04.832707 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 44/60
	I0911 12:01:05.835117 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 45/60
	I0911 12:01:06.836761 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 46/60
	I0911 12:01:07.838548 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 47/60
	I0911 12:01:08.840118 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 48/60
	I0911 12:01:09.841896 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 49/60
	I0911 12:01:10.844507 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 50/60
	I0911 12:01:11.846448 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 51/60
	I0911 12:01:12.847641 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 52/60
	I0911 12:01:13.849904 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 53/60
	I0911 12:01:15.240987 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 54/60
	I0911 12:01:16.243783 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 55/60
	I0911 12:01:17.246329 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 56/60
	I0911 12:01:18.247921 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 57/60
	I0911 12:01:19.249609 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 58/60
	I0911 12:01:20.251578 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 59/60
	I0911 12:01:21.252173 2253919 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:01:21.252251 2253919 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:01:21.252280 2253919 retry.go:31] will retry after 585.859132ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:01:21.839124 2253919 stop.go:39] StopHost: old-k8s-version-642215
	I0911 12:01:21.839729 2253919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:01:21.839807 2253919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:01:21.854880 2253919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41021
	I0911 12:01:21.855511 2253919 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:01:21.856206 2253919 main.go:141] libmachine: Using API Version  1
	I0911 12:01:21.856239 2253919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:01:21.856637 2253919 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:01:21.858884 2253919 out.go:177] * Stopping node "old-k8s-version-642215"  ...
	I0911 12:01:21.860655 2253919 main.go:141] libmachine: Stopping "old-k8s-version-642215"...
	I0911 12:01:21.860678 2253919 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:01:21.862487 2253919 main.go:141] libmachine: (old-k8s-version-642215) Calling .Stop
	I0911 12:01:21.866107 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 0/60
	I0911 12:01:22.868094 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 1/60
	I0911 12:01:23.869417 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 2/60
	I0911 12:01:24.871715 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 3/60
	I0911 12:01:25.873559 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 4/60
	I0911 12:01:26.875964 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 5/60
	I0911 12:01:27.877629 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 6/60
	I0911 12:01:28.879696 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 7/60
	I0911 12:01:29.882189 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 8/60
	I0911 12:01:30.884061 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 9/60
	I0911 12:01:31.886337 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 10/60
	I0911 12:01:32.888049 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 11/60
	I0911 12:01:33.889853 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 12/60
	I0911 12:01:34.891490 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 13/60
	I0911 12:01:35.893239 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 14/60
	I0911 12:01:36.895165 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 15/60
	I0911 12:01:37.896737 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 16/60
	I0911 12:01:38.898202 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 17/60
	I0911 12:01:39.899668 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 18/60
	I0911 12:01:40.901689 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 19/60
	I0911 12:01:41.903603 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 20/60
	I0911 12:01:42.905266 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 21/60
	I0911 12:01:43.907619 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 22/60
	I0911 12:01:44.909572 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 23/60
	I0911 12:01:45.911278 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 24/60
	I0911 12:01:46.913392 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 25/60
	I0911 12:01:47.915302 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 26/60
	I0911 12:01:48.916842 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 27/60
	I0911 12:01:49.918321 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 28/60
	I0911 12:01:50.919866 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 29/60
	I0911 12:01:51.922148 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 30/60
	I0911 12:01:52.923850 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 31/60
	I0911 12:01:53.925434 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 32/60
	I0911 12:01:54.927075 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 33/60
	I0911 12:01:55.929046 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 34/60
	I0911 12:01:56.930924 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 35/60
	I0911 12:01:57.932474 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 36/60
	I0911 12:01:58.934456 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 37/60
	I0911 12:01:59.936125 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 38/60
	I0911 12:02:00.937866 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 39/60
	I0911 12:02:01.940067 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 40/60
	I0911 12:02:02.941474 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 41/60
	I0911 12:02:03.943034 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 42/60
	I0911 12:02:04.945130 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 43/60
	I0911 12:02:05.947748 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 44/60
	I0911 12:02:06.949778 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 45/60
	I0911 12:02:07.951410 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 46/60
	I0911 12:02:08.953082 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 47/60
	I0911 12:02:09.955373 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 48/60
	I0911 12:02:10.956991 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 49/60
	I0911 12:02:11.958905 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 50/60
	I0911 12:02:12.960654 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 51/60
	I0911 12:02:13.962057 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 52/60
	I0911 12:02:14.963620 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 53/60
	I0911 12:02:15.965792 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 54/60
	I0911 12:02:16.967744 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 55/60
	I0911 12:02:17.969154 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 56/60
	I0911 12:02:18.971442 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 57/60
	I0911 12:02:19.972761 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 58/60
	I0911 12:02:20.974343 2253919 main.go:141] libmachine: (old-k8s-version-642215) Waiting for machine to stop 59/60
	I0911 12:02:21.975479 2253919 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:02:21.975540 2253919 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:02:21.978576 2253919 out.go:177] 
	W0911 12:02:21.980759 2253919 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0911 12:02:21.980783 2253919 out.go:239] * 
	* 
	W0911 12:02:22.000336 2253919 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 12:02:22.002425 2253919 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p old-k8s-version-642215 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215: exit status 3 (18.671675898s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:40.677169 2254847 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	E0911 12:02:40.677193 2254847 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-642215" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/Stop (140.01s)
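
The stop failure above follows a fixed pattern: libmachine issues a stop request, then polls the guest once per second for 60 attempts ("Waiting for machine to stop N/60") and, with the domain still reporting "Running", returns a temporary error that the CLI surfaces as GUEST_STOP_TIMEOUT. Below is a minimal Go sketch of that polling loop; the vm interface and function names are illustrative assumptions, not minikube's actual libmachine driver API.

package stopsketch

import (
	"errors"
	"fmt"
	"time"
)

// vm is a stand-in for a libmachine driver; only the two calls the log
// exercises are modelled here.
type vm interface {
	Stop() error            // ask the hypervisor to shut the guest down
	State() (string, error) // "Running", "Stopped", ...
}

// stopWithTimeout mirrors the behaviour visible in the log: request a stop,
// then poll once per second for up to maxWait attempts. If the guest is
// still "Running" afterwards, report a temporary error and let the caller
// decide whether to retry or exit with GUEST_STOP_TIMEOUT.
func stopWithTimeout(m vm, maxWait int) error {
	if err := m.Stop(); err != nil {
		return err
	}
	for i := 0; i < maxWait; i++ {
		state, err := m.State()
		if err != nil {
			return err
		}
		if state == "Stopped" {
			return nil
		}
		fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxWait)
		time.Sleep(time.Second)
	}
	return errors.New(`Temporary Error: stop: unable to stop vm, current state "Running"`)
}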

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076: exit status 3 (3.16777811s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:24.133290 2254818 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.157:22: connect: no route to host
	E0911 12:02:24.133321 2254818 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.157:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-352076 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-352076 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.175925845s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.72.157:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-352076 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076: exit status 3 (3.040385887s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:33.349225 2255017 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.72.157:22: connect: no route to host
	E0911 12:02:33.349241 2255017 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.72.157:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-352076" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.38s)
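
The post-stop checks in this group all degrade the same way: status --format={{.Host}} cannot open an SSH session to the guest ("dial tcp ...:22: connect: no route to host"), so instead of the "Stopped" the test expects, the host field is rendered as "Error" and the command exits with status 3. A rough sketch of that mapping, assuming a plain TCP reachability probe in place of minikube's real SSH client:

package statussketch

import (
	"fmt"
	"net"
	"time"
)

// hostState reduces a machine to the coarse states the report shows.
// A guest the hypervisor considers stopped is "Stopped"; a guest that
// still claims to be running but cannot be reached over SSH can only be
// reported as "Error", which the CLI turns into exit status 3.
func hostState(hypervisorSaysRunning bool, sshAddr string) string {
	if !hypervisorSaysRunning {
		return "Stopped"
	}
	conn, err := net.DialTimeout("tcp", sshAddr, 3*time.Second)
	if err != nil {
		fmt.Printf("status error: NewSession: new client: dial %s: %v\n", sshAddr, err)
		return "Error"
	}
	conn.Close()
	return "Running"
}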

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.36s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-484027 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-484027 --alsologtostderr -v=3: exit status 82 (2m0.891063611s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-484027"  ...
	* Stopping node "default-k8s-diff-port-484027"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 12:02:26.296487 2254958 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:02:26.296691 2254958 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:02:26.296705 2254958 out.go:309] Setting ErrFile to fd 2...
	I0911 12:02:26.296712 2254958 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:02:26.297055 2254958 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:02:26.297432 2254958 out.go:303] Setting JSON to false
	I0911 12:02:26.297558 2254958 mustload.go:65] Loading cluster: default-k8s-diff-port-484027
	I0911 12:02:26.298070 2254958 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:02:26.298211 2254958 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:02:26.298446 2254958 mustload.go:65] Loading cluster: default-k8s-diff-port-484027
	I0911 12:02:26.298628 2254958 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:02:26.298706 2254958 stop.go:39] StopHost: default-k8s-diff-port-484027
	I0911 12:02:26.299293 2254958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:02:26.299366 2254958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:02:26.314341 2254958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33127
	I0911 12:02:26.314906 2254958 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:02:26.315607 2254958 main.go:141] libmachine: Using API Version  1
	I0911 12:02:26.315638 2254958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:02:26.316170 2254958 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:02:26.318883 2254958 out.go:177] * Stopping node "default-k8s-diff-port-484027"  ...
	I0911 12:02:26.320521 2254958 main.go:141] libmachine: Stopping "default-k8s-diff-port-484027"...
	I0911 12:02:26.320553 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:02:26.322298 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Stop
	I0911 12:02:26.325746 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 0/60
	I0911 12:02:27.327295 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 1/60
	I0911 12:02:28.329026 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 2/60
	I0911 12:02:29.330770 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 3/60
	I0911 12:02:30.332507 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 4/60
	I0911 12:02:31.334798 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 5/60
	I0911 12:02:32.336331 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 6/60
	I0911 12:02:33.338031 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 7/60
	I0911 12:02:34.339724 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 8/60
	I0911 12:02:35.341444 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 9/60
	I0911 12:02:36.343229 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 10/60
	I0911 12:02:37.344796 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 11/60
	I0911 12:02:38.346634 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 12/60
	I0911 12:02:39.348419 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 13/60
	I0911 12:02:40.350300 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 14/60
	I0911 12:02:41.352694 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 15/60
	I0911 12:02:42.354318 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 16/60
	I0911 12:02:43.356000 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 17/60
	I0911 12:02:44.357567 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 18/60
	I0911 12:02:45.359198 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 19/60
	I0911 12:02:46.361166 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 20/60
	I0911 12:02:47.363006 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 21/60
	I0911 12:02:48.364614 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 22/60
	I0911 12:02:49.366254 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 23/60
	I0911 12:02:50.367796 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 24/60
	I0911 12:02:51.370149 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 25/60
	I0911 12:02:52.371800 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 26/60
	I0911 12:02:53.373318 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 27/60
	I0911 12:02:54.375141 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 28/60
	I0911 12:02:55.376793 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 29/60
	I0911 12:02:56.379502 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 30/60
	I0911 12:02:57.381237 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 31/60
	I0911 12:02:58.382723 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 32/60
	I0911 12:02:59.384215 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 33/60
	I0911 12:03:00.385787 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 34/60
	I0911 12:03:01.388015 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 35/60
	I0911 12:03:02.389685 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 36/60
	I0911 12:03:03.391725 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 37/60
	I0911 12:03:04.393682 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 38/60
	I0911 12:03:05.395275 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 39/60
	I0911 12:03:06.397054 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 40/60
	I0911 12:03:07.398611 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 41/60
	I0911 12:03:08.400285 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 42/60
	I0911 12:03:09.402078 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 43/60
	I0911 12:03:10.403795 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 44/60
	I0911 12:03:11.406077 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 45/60
	I0911 12:03:12.407876 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 46/60
	I0911 12:03:13.409753 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 47/60
	I0911 12:03:14.411477 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 48/60
	I0911 12:03:15.413433 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 49/60
	I0911 12:03:16.415071 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 50/60
	I0911 12:03:17.416727 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 51/60
	I0911 12:03:18.418254 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 52/60
	I0911 12:03:19.419903 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 53/60
	I0911 12:03:20.421372 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 54/60
	I0911 12:03:21.423781 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 55/60
	I0911 12:03:22.425349 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 56/60
	I0911 12:03:23.427297 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 57/60
	I0911 12:03:24.428737 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 58/60
	I0911 12:03:25.430547 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 59/60
	I0911 12:03:26.431962 2254958 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:03:26.432020 2254958 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:03:26.432063 2254958 retry.go:31] will retry after 553.543657ms: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:03:26.985796 2254958 stop.go:39] StopHost: default-k8s-diff-port-484027
	I0911 12:03:26.986268 2254958 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:03:26.986334 2254958 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:03:27.001841 2254958 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41881
	I0911 12:03:27.002346 2254958 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:03:27.002950 2254958 main.go:141] libmachine: Using API Version  1
	I0911 12:03:27.002975 2254958 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:03:27.003347 2254958 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:03:27.005424 2254958 out.go:177] * Stopping node "default-k8s-diff-port-484027"  ...
	I0911 12:03:27.007001 2254958 main.go:141] libmachine: Stopping "default-k8s-diff-port-484027"...
	I0911 12:03:27.007023 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:03:27.008855 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Stop
	I0911 12:03:27.012509 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 0/60
	I0911 12:03:28.014012 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 1/60
	I0911 12:03:29.015675 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 2/60
	I0911 12:03:30.017264 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 3/60
	I0911 12:03:31.018914 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 4/60
	I0911 12:03:32.020797 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 5/60
	I0911 12:03:33.022486 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 6/60
	I0911 12:03:34.024136 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 7/60
	I0911 12:03:35.026218 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 8/60
	I0911 12:03:36.027758 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 9/60
	I0911 12:03:37.029990 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 10/60
	I0911 12:03:38.031461 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 11/60
	I0911 12:03:39.033175 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 12/60
	I0911 12:03:40.034771 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 13/60
	I0911 12:03:41.036475 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 14/60
	I0911 12:03:42.038382 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 15/60
	I0911 12:03:43.040102 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 16/60
	I0911 12:03:44.041832 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 17/60
	I0911 12:03:45.043328 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 18/60
	I0911 12:03:46.045092 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 19/60
	I0911 12:03:47.047118 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 20/60
	I0911 12:03:48.048691 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 21/60
	I0911 12:03:49.050218 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 22/60
	I0911 12:03:50.051978 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 23/60
	I0911 12:03:51.053775 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 24/60
	I0911 12:03:52.055839 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 25/60
	I0911 12:03:53.057632 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 26/60
	I0911 12:03:54.059235 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 27/60
	I0911 12:03:55.061020 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 28/60
	I0911 12:03:56.062695 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 29/60
	I0911 12:03:57.065098 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 30/60
	I0911 12:03:58.066743 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 31/60
	I0911 12:03:59.068303 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 32/60
	I0911 12:04:00.069931 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 33/60
	I0911 12:04:01.071672 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 34/60
	I0911 12:04:02.073708 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 35/60
	I0911 12:04:03.075698 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 36/60
	I0911 12:04:04.077524 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 37/60
	I0911 12:04:05.079117 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 38/60
	I0911 12:04:06.080711 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 39/60
	I0911 12:04:07.082468 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 40/60
	I0911 12:04:08.083947 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 41/60
	I0911 12:04:09.085544 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 42/60
	I0911 12:04:10.086942 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 43/60
	I0911 12:04:11.088700 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 44/60
	I0911 12:04:12.091111 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 45/60
	I0911 12:04:13.092511 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 46/60
	I0911 12:04:14.094001 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 47/60
	I0911 12:04:15.095773 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 48/60
	I0911 12:04:16.097447 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 49/60
	I0911 12:04:17.099990 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 50/60
	I0911 12:04:18.101339 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 51/60
	I0911 12:04:19.102962 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 52/60
	I0911 12:04:20.104478 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 53/60
	I0911 12:04:21.105946 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 54/60
	I0911 12:04:22.107939 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 55/60
	I0911 12:04:23.109433 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 56/60
	I0911 12:04:24.111094 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 57/60
	I0911 12:04:25.112707 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 58/60
	I0911 12:04:26.114422 2254958 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for machine to stop 59/60
	I0911 12:04:27.115564 2254958 stop.go:59] stop err: unable to stop vm, current state "Running"
	W0911 12:04:27.115617 2254958 stop.go:163] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0911 12:04:27.118041 2254958 out.go:177] 
	W0911 12:04:27.119718 2254958 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0911 12:04:27.119735 2254958 out.go:239] * 
	* 
	W0911 12:04:27.137169 2254958 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_5.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0911 12:04:27.139957 2254958 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-484027 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027: exit status 3 (18.463216239s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:04:45.605256 2255642 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0911 12:04:45.605280 2255642 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-484027" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.36s)
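
This run shows the second half of the stop flow: when the first 60-attempt wait times out, the stop is retried once after a short backoff ("will retry after 553.543657ms"), a second 60-attempt wait follows, and only then does the command give up with GUEST_STOP_TIMEOUT and exit status 82, which is why the test clocks in at just over two minutes. A self-contained sketch of that bounded retry, with the single-pass stop supplied as a callback rather than minikube's actual stop.go code:

package retrysketch

import (
	"fmt"
	"time"
)

// stopWithRetry runs stopOnce up to attempts times, pausing briefly between
// tries, which matches the two "Stopping node ..." passes in the log above.
func stopWithRetry(stopOnce func() error, attempts int, backoff time.Duration) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = stopOnce(); err == nil {
			return nil
		}
		if i < attempts-1 {
			fmt.Printf("will retry after %v: %v\n", backoff, err)
			time.Sleep(backoff)
		}
	}
	// The caller translates a final failure into GUEST_STOP_TIMEOUT (exit status 82).
	return err
}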

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462: exit status 3 (3.168245025s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:33.349220 2254987 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	E0911 12:02:33.349240 2254987 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-235462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-235462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.168700343s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-235462 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462: exit status 3 (3.046386414s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:42.565357 2255116 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host
	E0911 12:02:42.565392 2255116 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-235462" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.38s)
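
Every "addons enable dashboard" attempt above fails in the same pre-flight step: before touching the addon, minikube checks whether the cluster is paused by listing containers with crictl over SSH, and with the guest unreachable that check aborts the command with MK_ADDON_ENABLE_PAUSED and exit status 11. The error chain in the log ("enabled failed: check paused: list paused: crictl list: NewSession: ...") is just wrapped errors; a small sketch of such a chain, using a stubbed SSH runner and hypothetical helper names:

package addonsketch

import (
	"errors"
	"fmt"
)

// runOverSSH stands in for minikube's SSH command runner; here it always
// fails the way the log does when the guest has no route to host.
func runOverSSH(cmd string) (string, error) {
	return "", errors.New("NewSession: new client: new client: dial tcp 192.168.50.96:22: connect: no route to host")
}

// clusterPaused would normally run crictl on the guest and inspect the
// returned container list; the parsing is elided in this sketch.
func clusterPaused() (bool, error) {
	if _, err := runOverSSH("sudo crictl ps -a -o json"); err != nil {
		return false, fmt.Errorf("list paused: crictl list: %w", err)
	}
	return false, nil
}

// enableAddon refuses to proceed while the paused check itself fails,
// wrapping the error the same way the CLI prints it.
func enableAddon(name string) error {
	if _, err := clusterPaused(); err != nil {
		return fmt.Errorf("enabled failed: check paused: %w", err)
	}
	// Real enable logic would apply the addon's manifests here.
	return nil
}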

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215: exit status 3 (3.167783105s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:43.845346 2255157 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	E0911 12:02:43.845372 2255157 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-642215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-642215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.170459123s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-642215 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215: exit status 3 (3.04522742s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:02:53.061292 2255261 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host
	E0911 12:02:53.061311 2255261 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.58:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "old-k8s-version-642215" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (12.38s)
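
The assertion failing here (and in the other EnableAddonAfterStop runs) is modest: render the Host field of minikube status and compare it to the literal "Stopped". A non-zero exit by itself is tolerated ("status error: exit status 3 (may be ok)"); it is the "Error" value that fails the test. A simplified version of that check, not the actual helpers in start_stop_delete_test.go:

package checksketch

import (
	"os/exec"
	"strings"
	"testing"
)

// assertStopped runs the same command the test does and fails only when
// the rendered host state is not "Stopped"; a non-zero exit by itself is
// tolerated, since a stopped host cannot answer every status probe.
func assertStopped(t *testing.T, profile string) {
	t.Helper()
	out, err := exec.Command("out/minikube-linux-amd64", "status",
		"--format={{.Host}}", "-p", profile, "-n", profile).CombinedOutput()
	if err != nil {
		t.Logf("status returned %v (may be ok)", err)
	}
	if got := strings.TrimSpace(string(out)); got != "Stopped" {
		t.Errorf("expected post-stop host status to be %q but got %q", "Stopped", got)
	}
}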

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027: exit status 3 (3.167572192s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:04:48.773223 2255716 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0911 12:04:48.773252 2255716 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-484027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-484027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.170338707s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_1.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-484027 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027: exit status 3 (3.045540727s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0911 12:04:57.989284 2255773 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host
	E0911 12:04:57.989306 2255773 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.230:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-484027" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.35s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0911 12:08:47.569140 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 12:09:15.053594 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 12:11:22.842114 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642215 -n old-k8s-version-642215
start_stop_delete_test.go:274: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:17:41.232482178 +0000 UTC m=+4866.545107064
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
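
The failure above is a timeout rather than a crash: the test waits up to 9 minutes for a pod labelled "k8s-app=kubernetes-dashboard" to appear and run in the "kubernetes-dashboard" namespace, and the restarted cluster never produces one, so the wait ends with "context deadline exceeded". A rough equivalent of that wait, shelling out to kubectl instead of using the project's helpers_test.go utilities:

package waitsketch

import (
	"context"
	"os/exec"
	"strings"
	"time"
)

// waitForDashboard polls for a running kubernetes-dashboard pod until the
// timeout expires, which is how the failing test reached its 9m deadline.
func waitForDashboard(kubeContext string, timeout time.Duration) error {
	ctx, cancel := context.WithTimeout(context.Background(), timeout)
	defer cancel()
	for {
		out, _ := exec.CommandContext(ctx, "kubectl", "--context", kubeContext,
			"get", "pods", "-n", "kubernetes-dashboard",
			"-l", "k8s-app=kubernetes-dashboard",
			"-o", `jsonpath={.items[*].status.phase}`).Output()
		if strings.Contains(string(out), "Running") {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err() // "context deadline exceeded", as in the log
		case <-time.After(5 * time.Second):
		}
	}
}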
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-642215 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-642215 logs -n 25: (1.644662395s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:57 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559775 ssh                                | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559775 -- sudo                         | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559775                                 | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-352076             | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:59 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-235462            | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642215        | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:01 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-226537 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	|         | disable-driver-mounts-226537                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:02 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-352076                  | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484027  | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-235462                 | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642215             | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484027       | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC | 11 Sep 23 12:13 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
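Note: the flags in the final "start" row above are wrapped across table cells but amount to a single command line. Reconstructed from the table (the plain "minikube" binary name here stands in for whichever build the job actually invoked), it is roughly:

    minikube start -p default-k8s-diff-port-484027 \
      --memory=2200 --alsologtostderr --wait=true \
      --apiserver-port=8444 --driver=kvm2 \
      --container-runtime=crio \
      --kubernetes-version=v1.28.1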
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:04:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:04:58.034724 2255814 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:04:58.034920 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.034929 2255814 out.go:309] Setting ErrFile to fd 2...
	I0911 12:04:58.034933 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.035102 2255814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:04:58.035709 2255814 out.go:303] Setting JSON to false
	I0911 12:04:58.036651 2255814 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236849,"bootTime":1694197049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:04:58.036727 2255814 start.go:138] virtualization: kvm guest
	I0911 12:04:58.039239 2255814 out.go:177] * [default-k8s-diff-port-484027] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:04:58.041110 2255814 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:04:58.041181 2255814 notify.go:220] Checking for updates...
	I0911 12:04:58.042795 2255814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:04:58.044550 2255814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:04:58.046047 2255814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:04:58.047718 2255814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:04:58.049343 2255814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:04:58.051545 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:04:58.051956 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.052047 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.068212 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0911 12:04:58.068649 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.069289 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.069318 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.069763 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.069987 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.070276 2255814 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:04:58.070629 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.070670 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.085941 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0911 12:04:58.086461 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.086966 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.086995 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.087337 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.087522 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.126206 2255814 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 12:04:58.127558 2255814 start.go:298] selected driver: kvm2
	I0911 12:04:58.127571 2255814 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.127716 2255814 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:04:58.128430 2255814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.128555 2255814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:04:58.144660 2255814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:04:58.145091 2255814 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 12:04:58.145145 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:04:58.145159 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:04:58.145176 2255814 start_flags.go:321] config:
	{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.145377 2255814 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.147634 2255814 out.go:177] * Starting control plane node default-k8s-diff-port-484027 in cluster default-k8s-diff-port-484027
	I0911 12:04:56.741109 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:04:58.149438 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:04:58.149510 2255814 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:04:58.149543 2255814 cache.go:57] Caching tarball of preloaded images
	I0911 12:04:58.149650 2255814 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:04:58.149664 2255814 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:04:58.149825 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:04:58.150070 2255814 start.go:365] acquiring machines lock for default-k8s-diff-port-484027: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:04:59.813165 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:05.893188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:08.965171 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:15.045168 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:18.117188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:24.197148 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:27.269089 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:33.349151 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:36.421191 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:42.501129 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:45.573209 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:51.653159 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:54.725153 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:00.805201 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:03.877105 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:09.957136 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:13.029119 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:19.109157 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:22.181096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:28.261156 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:31.333179 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:37.413187 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:40.485239 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:46.565193 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:49.637182 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:55.717194 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:58.789181 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:04.869137 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:07.941096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
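Note: the long run of "connect: no route to host" retries above is process 2255048 (the no-preload-352076 machine at 192.168.72.157) never answering on SSH after its restart. A manual triage for this pattern, assuming the libvirt domain carries the profile name as the kvm2 driver normally sets it, would look something like:

    # does anything answer on the SSH port at all?
    nc -vz -w 5 192.168.72.157 22
    # is the domain actually running, and which lease did libvirt hand out?
    sudo virsh domstate no-preload-352076
    sudo virsh net-dhcp-leases default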
	I0911 12:07:10.946790 2255187 start.go:369] acquired machines lock for "embed-certs-235462" in 4m28.227506413s
	I0911 12:07:10.946859 2255187 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:10.946884 2255187 fix.go:54] fixHost starting: 
	I0911 12:07:10.947279 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:10.947318 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:10.963823 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0911 12:07:10.964352 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:10.965050 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:07:10.965086 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:10.965556 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:10.965804 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:10.965995 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:07:10.967759 2255187 fix.go:102] recreateIfNeeded on embed-certs-235462: state=Stopped err=<nil>
	I0911 12:07:10.967790 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	W0911 12:07:10.968000 2255187 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:10.970103 2255187 out.go:177] * Restarting existing kvm2 VM for "embed-certs-235462" ...
	I0911 12:07:10.971879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Start
	I0911 12:07:10.972130 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring networks are active...
	I0911 12:07:10.973084 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network default is active
	I0911 12:07:10.973424 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network mk-embed-certs-235462 is active
	I0911 12:07:10.973888 2255187 main.go:141] libmachine: (embed-certs-235462) Getting domain xml...
	I0911 12:07:10.974726 2255187 main.go:141] libmachine: (embed-certs-235462) Creating domain...
	I0911 12:07:12.246736 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting to get IP...
	I0911 12:07:12.247648 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.248019 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.248152 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.248016 2256167 retry.go:31] will retry after 245.040457ms: waiting for machine to come up
	I0911 12:07:12.494788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.495311 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.495345 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.495247 2256167 retry.go:31] will retry after 312.634812ms: waiting for machine to come up
	I0911 12:07:10.943345 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:10.943403 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:07:10.946565 2255048 machine.go:91] provisioned docker machine in 4m37.405921901s
	I0911 12:07:10.946641 2255048 fix.go:56] fixHost completed within 4m37.430192233s
	I0911 12:07:10.946648 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 4m37.430236677s
	W0911 12:07:10.946673 2255048 start.go:672] error starting host: provision: host is not running
	W0911 12:07:10.946819 2255048 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0911 12:07:10.946833 2255048 start.go:687] Will try again in 5 seconds ...
	I0911 12:07:12.810038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.810461 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.810496 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.810398 2256167 retry.go:31] will retry after 478.036066ms: waiting for machine to come up
	I0911 12:07:13.290252 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.290701 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.290731 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.290646 2256167 retry.go:31] will retry after 576.124591ms: waiting for machine to come up
	I0911 12:07:13.868555 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.868978 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.869004 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.868931 2256167 retry.go:31] will retry after 487.107859ms: waiting for machine to come up
	I0911 12:07:14.357765 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:14.358240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:14.358315 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:14.358173 2256167 retry.go:31] will retry after 903.857312ms: waiting for machine to come up
	I0911 12:07:15.263350 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:15.263852 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:15.263908 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:15.263777 2256167 retry.go:31] will retry after 830.555039ms: waiting for machine to come up
	I0911 12:07:16.096337 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:16.096743 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:16.096774 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:16.096696 2256167 retry.go:31] will retry after 1.307188723s: waiting for machine to come up
	I0911 12:07:17.406129 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:17.406558 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:17.406584 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:17.406512 2256167 retry.go:31] will retry after 1.681887732s: waiting for machine to come up
	I0911 12:07:15.947503 2255048 start.go:365] acquiring machines lock for no-preload-352076: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:07:19.090590 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:19.091013 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:19.091038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:19.090965 2256167 retry.go:31] will retry after 2.013298988s: waiting for machine to come up
	I0911 12:07:21.105851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:21.106384 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:21.106418 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:21.106319 2256167 retry.go:31] will retry after 2.714578164s: waiting for machine to come up
	I0911 12:07:23.823181 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:23.823687 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:23.823772 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:23.823623 2256167 retry.go:31] will retry after 2.321779277s: waiting for machine to come up
	I0911 12:07:26.147527 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:26.147956 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:26.147986 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:26.147896 2256167 retry.go:31] will retry after 4.307300197s: waiting for machine to come up
	I0911 12:07:31.786165 2255304 start.go:369] acquired machines lock for "old-k8s-version-642215" in 4m38.564304718s
	I0911 12:07:31.786239 2255304 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:31.786261 2255304 fix.go:54] fixHost starting: 
	I0911 12:07:31.786754 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:31.786809 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:31.806853 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0911 12:07:31.807320 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:31.807871 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:07:31.807906 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:31.808284 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:31.808473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:31.808622 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:07:31.810311 2255304 fix.go:102] recreateIfNeeded on old-k8s-version-642215: state=Stopped err=<nil>
	I0911 12:07:31.810345 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	W0911 12:07:31.810524 2255304 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:31.813302 2255304 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642215" ...
	I0911 12:07:30.458075 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.458554 2255187 main.go:141] libmachine: (embed-certs-235462) Found IP for machine: 192.168.50.96
	I0911 12:07:30.458579 2255187 main.go:141] libmachine: (embed-certs-235462) Reserving static IP address...
	I0911 12:07:30.458593 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has current primary IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.459036 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.459066 2255187 main.go:141] libmachine: (embed-certs-235462) Reserved static IP address: 192.168.50.96
	I0911 12:07:30.459088 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | skip adding static IP to network mk-embed-certs-235462 - found existing host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"}
	I0911 12:07:30.459104 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Getting to WaitForSSH function...
	I0911 12:07:30.459117 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting for SSH to be available...
	I0911 12:07:30.461594 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.461938 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.461979 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.462087 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH client type: external
	I0911 12:07:30.462109 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa (-rw-------)
	I0911 12:07:30.462146 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:30.462165 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | About to run SSH command:
	I0911 12:07:30.462200 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | exit 0
	I0911 12:07:30.556976 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:30.557370 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetConfigRaw
	I0911 12:07:30.558054 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.560898 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561254 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.561292 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561638 2255187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/config.json ...
	I0911 12:07:30.561863 2255187 machine.go:88] provisioning docker machine ...
	I0911 12:07:30.561885 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:30.562128 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562296 2255187 buildroot.go:166] provisioning hostname "embed-certs-235462"
	I0911 12:07:30.562315 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562497 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.565095 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565484 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.565519 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565682 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.565852 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566021 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566126 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.566273 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.566796 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.566814 2255187 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-235462 && echo "embed-certs-235462" | sudo tee /etc/hostname
	I0911 12:07:30.706262 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-235462
	
	I0911 12:07:30.706294 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.709499 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.709822 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.709862 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.710067 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.710331 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710598 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710762 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.710986 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.711479 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.711503 2255187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235462/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:30.850084 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
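Note: the provisioning step above sets the guest hostname and patches the 127.0.1.1 entry in /etc/hosts. A quick way to spot-check the result from the host (a manual verification, not part of the test itself) is:

    minikube ssh -p embed-certs-235462 "hostname; grep embed-certs-235462 /etc/hosts"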
	I0911 12:07:30.850120 2255187 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:30.850141 2255187 buildroot.go:174] setting up certificates
	I0911 12:07:30.850155 2255187 provision.go:83] configureAuth start
	I0911 12:07:30.850168 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.850494 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.853326 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853650 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.853680 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853864 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.856233 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856574 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.856639 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856755 2255187 provision.go:138] copyHostCerts
	I0911 12:07:30.856844 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:30.856859 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:30.856933 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:30.857039 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:30.857050 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:30.857078 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:30.857143 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:30.857150 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:30.857170 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:30.857217 2255187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235462 san=[192.168.50.96 192.168.50.96 localhost 127.0.0.1 minikube embed-certs-235462]
	I0911 12:07:30.996533 2255187 provision.go:172] copyRemoteCerts
	I0911 12:07:30.996607 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:30.996643 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.999950 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.000370 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000514 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.000787 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.000978 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.001133 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.095524 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:31.121456 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:31.145813 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0911 12:07:31.171621 2255187 provision.go:86] duration metric: configureAuth took 321.448095ms
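Note: configureAuth generated a server certificate with the SAN list shown earlier (192.168.50.96, localhost, 127.0.0.1, minikube, embed-certs-235462) and copied it to /etc/docker/server.pem on the guest. If those SANs ever need to be confirmed, the local copy can be inspected with standard openssl:

    # list the SANs baked into the freshly generated server certificate
    openssl x509 -in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem \
      -noout -text | grep -A1 "Subject Alternative Name"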
	I0911 12:07:31.171657 2255187 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:31.171880 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:07:31.171975 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.175276 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.175783 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.175819 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.176082 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.176356 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176535 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176724 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.177014 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.177500 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.177521 2255187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:31.514064 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:31.514090 2255187 machine.go:91] provisioned docker machine in 952.213137ms
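Note: the SSH command above wrote CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12' to /etc/sysconfig/crio.minikube and restarted crio. A follow-up check on the node (again a manual verification, not something the test runs) would be:

    minikube ssh -p embed-certs-235462 "cat /etc/sysconfig/crio.minikube; systemctl is-active crio"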
	I0911 12:07:31.514101 2255187 start.go:300] post-start starting for "embed-certs-235462" (driver="kvm2")
	I0911 12:07:31.514135 2255187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:31.514188 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.514651 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:31.514705 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.517108 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517563 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.517599 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517819 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.518053 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.518243 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.518426 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.612293 2255187 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:31.616991 2255187 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:31.617022 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:31.617143 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:31.617263 2255187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:31.617393 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:31.627725 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:31.652196 2255187 start.go:303] post-start completed in 138.067305ms
	I0911 12:07:31.652232 2255187 fix.go:56] fixHost completed within 20.705348144s
	I0911 12:07:31.652264 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.655234 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655598 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.655633 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655810 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.656000 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656236 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656373 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.656547 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.657061 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.657078 2255187 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:31.785981 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434051.730508911
	
	I0911 12:07:31.786019 2255187 fix.go:206] guest clock: 1694434051.730508911
	I0911 12:07:31.786029 2255187 fix.go:219] Guest: 2023-09-11 12:07:31.730508911 +0000 UTC Remote: 2023-09-11 12:07:31.65223725 +0000 UTC m=+289.079171252 (delta=78.271661ms)
	I0911 12:07:31.786076 2255187 fix.go:190] guest clock delta is within tolerance: 78.271661ms
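The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it to the local timestamp, and accept the host when the difference is small. The following is a minimal Go sketch of that comparison; parseGuestClock and the two-second tolerance are illustrative assumptions, not minikube's actual code.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts `date +%s.%N` output (e.g. "1694434051.730508911")
// into a time.Time. Hypothetical helper for illustration.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	// Output captured from the guest, as in the log above.
	guest, err := parseGuestClock("1694434051.730508911\n")
	if err != nil {
		panic(err)
	}
	remote := time.Now()

	// Accept the clock if guest and host differ by less than an assumed tolerance.
	const tolerance = 2 * time.Second
	delta := guest.Sub(remote)
	if math.Abs(float64(delta)) < float64(tolerance) {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance %v; clocks should be synced\n", delta, tolerance)
	}
}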
	I0911 12:07:31.786082 2255187 start.go:83] releasing machines lock for "embed-certs-235462", held for 20.839248295s
	I0911 12:07:31.786115 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.786440 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:31.789427 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.789809 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.789844 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.790024 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790717 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790954 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.791062 2255187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:31.791130 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.791177 2255187 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:31.791208 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.793991 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794359 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794393 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794414 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794669 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.794879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.794871 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794913 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.795104 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.795112 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795289 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.795291 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.795468 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795585 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.910483 2255187 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:31.916739 2255187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:32.059621 2255187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:32.066857 2255187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:32.066955 2255187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:32.084365 2255187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:32.084401 2255187 start.go:466] detecting cgroup driver to use...
	I0911 12:07:32.084518 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:32.098782 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:32.111344 2255187 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:32.111421 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:32.124323 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:32.137910 2255187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:32.244478 2255187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:32.374160 2255187 docker.go:212] disabling docker service ...
	I0911 12:07:32.374262 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:32.387762 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:32.401120 2255187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:32.522150 2255187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:31.815672 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Start
	I0911 12:07:31.815900 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring networks are active...
	I0911 12:07:31.816771 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network default is active
	I0911 12:07:31.817161 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network mk-old-k8s-version-642215 is active
	I0911 12:07:31.817559 2255304 main.go:141] libmachine: (old-k8s-version-642215) Getting domain xml...
	I0911 12:07:31.818275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Creating domain...
	I0911 12:07:32.639647 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:32.658106 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:32.677573 2255187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:07:32.677658 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.687407 2255187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:32.687499 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.697706 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.707515 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
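The three sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf so that the pause image becomes registry.k8s.io/pause:3.9, the cgroup manager becomes cgroupfs, and conmon_cgroup is reset to "pod". Below is a sketch of the same rewrite done in Go with regular expressions over the file contents; rewriteCrioConf and the sample input are illustrative, not the tool's actual implementation.

package main

import (
	"fmt"
	"regexp"
)

// rewriteCrioConf applies the same substitutions as the sed commands in the
// log: set pause_image, set cgroup_manager, drop any conmon_cgroup line and
// re-add it right after cgroup_manager. Hypothetical helper for illustration.
func rewriteCrioConf(conf string) string {
	pause := regexp.MustCompile(`(?m)^.*pause_image = .*$`)
	conf = pause.ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)

	cgroup := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
	conf = cgroup.ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)

	conmon := regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`)
	conf = conmon.ReplaceAllString(conf, "")

	after := regexp.MustCompile(`(?m)^cgroup_manager = .*$`)
	conf = after.ReplaceAllString(conf, "${0}\nconmon_cgroup = \"pod\"")
	return conf
}

func main() {
	in := `[crio.image]
pause_image = "registry.k8s.io/pause:3.6"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`
	fmt.Print(rewriteCrioConf(in))
}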
	I0911 12:07:32.718090 2255187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:32.728668 2255187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:32.737652 2255187 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:32.737732 2255187 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:32.751510 2255187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:32.760774 2255187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:32.881718 2255187 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:07:33.064736 2255187 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:33.064859 2255187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:33.071112 2255187 start.go:534] Will wait 60s for crictl version
	I0911 12:07:33.071195 2255187 ssh_runner.go:195] Run: which crictl
	I0911 12:07:33.075202 2255187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:33.111795 2255187 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:33.111898 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.162455 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.224538 2255187 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:07:33.226156 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:33.229640 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230164 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:33.230202 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230434 2255187 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:33.235232 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:33.248016 2255187 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:07:33.248094 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:33.290506 2255187 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:07:33.290594 2255187 ssh_runner.go:195] Run: which lz4
	I0911 12:07:33.294802 2255187 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:07:33.299115 2255187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:33.299169 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:07:35.241115 2255187 crio.go:444] Took 1.946355 seconds to copy over tarball
	I0911 12:07:35.241211 2255187 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:07:33.131519 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting to get IP...
	I0911 12:07:33.132551 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.133144 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.133255 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.133123 2256281 retry.go:31] will retry after 206.885556ms: waiting for machine to come up
	I0911 12:07:33.341966 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.342472 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.342495 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.342420 2256281 retry.go:31] will retry after 235.74047ms: waiting for machine to come up
	I0911 12:07:33.580161 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.580683 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.580720 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.580644 2256281 retry.go:31] will retry after 407.752379ms: waiting for machine to come up
	I0911 12:07:33.990505 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.991033 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.991099 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.991019 2256281 retry.go:31] will retry after 579.085044ms: waiting for machine to come up
	I0911 12:07:34.571958 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:34.572419 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:34.572451 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:34.572371 2256281 retry.go:31] will retry after 584.464544ms: waiting for machine to come up
	I0911 12:07:35.158152 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.158644 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.158677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.158579 2256281 retry.go:31] will retry after 750.2868ms: waiting for machine to come up
	I0911 12:07:35.910364 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.910949 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.910983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.910887 2256281 retry.go:31] will retry after 981.989906ms: waiting for machine to come up
	I0911 12:07:36.894691 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:36.895196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:36.895233 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:36.895151 2256281 retry.go:31] will retry after 1.082443232s: waiting for machine to come up
	I0911 12:07:37.979265 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:37.979773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:37.979802 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:37.979699 2256281 retry.go:31] will retry after 1.429811083s: waiting for machine to come up
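The retry.go lines above poll libvirt for the old-k8s-version-642215 domain's DHCP lease, sleeping a progressively longer interval between attempts until an IP address appears. A compact Go sketch of that wait-with-growing-backoff pattern follows; lookupIP, the backoff parameters, and the timeout are assumptions for illustration, not the kvm2 driver's actual values.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for querying the hypervisor for the domain's DHCP lease.
// Hypothetical: here it simply fails a few times before succeeding.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errors.New("unable to find current IP address of domain")
	}
	return "192.168.61.58", nil
}

// waitForIP retries lookupIP with a jittered, growing delay, mirroring the
// "will retry after Xms: waiting for machine to come up" messages in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		if delay < 4*time.Second {
			delay *= 2 // grow the base delay, roughly like the increasing intervals above
		}
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitForIP(2 * time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("Found IP for machine:", ip)
}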
	I0911 12:07:38.272328 2255187 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.031081597s)
	I0911 12:07:38.272378 2255187 crio.go:451] Took 3.031222 seconds to extract the tarball
	I0911 12:07:38.272392 2255187 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:07:38.314797 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:38.363925 2255187 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:07:38.363956 2255187 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:07:38.364034 2255187 ssh_runner.go:195] Run: crio config
	I0911 12:07:38.433884 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:38.433915 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:38.433941 2255187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:07:38.433969 2255187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235462 NodeName:embed-certs-235462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:07:38.434156 2255187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235462"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:07:38.434250 2255187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-235462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:07:38.434339 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:07:38.447171 2255187 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:07:38.447273 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:07:38.459426 2255187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:07:38.478081 2255187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:07:38.495571 2255187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0911 12:07:38.514602 2255187 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I0911 12:07:38.518616 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:38.531178 2255187 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462 for IP: 192.168.50.96
	I0911 12:07:38.531246 2255187 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:07:38.531410 2255187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:07:38.531471 2255187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:07:38.531565 2255187 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/client.key
	I0911 12:07:38.531650 2255187 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key.8e4e34e1
	I0911 12:07:38.531705 2255187 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key
	I0911 12:07:38.531860 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:07:38.531918 2255187 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:07:38.531933 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:07:38.531976 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:07:38.532020 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:07:38.532071 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:07:38.532140 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:38.532870 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:07:38.558426 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0911 12:07:38.582526 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:07:38.606798 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:07:38.630691 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:07:38.655580 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:07:38.682355 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:07:38.707701 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:07:38.732346 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:07:38.757688 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:07:38.783458 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:07:38.808481 2255187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:07:38.825822 2255187 ssh_runner.go:195] Run: openssl version
	I0911 12:07:38.831897 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:07:38.842170 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847385 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847467 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.853456 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:07:38.864049 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:07:38.874236 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879391 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879463 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.885352 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:07:38.895225 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:07:38.905599 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910660 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910748 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.916920 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:07:38.927096 2255187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:07:38.932313 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:07:38.939081 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:07:38.946028 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:07:38.952644 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:07:38.959391 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:07:38.965871 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
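Each `openssl x509 -noout -in ... -checkend 86400` call above exits non-zero if the certificate expires within the next 86400 seconds (24 hours). The same check can be done directly against a PEM file with Go's crypto/x509; a minimal sketch, with the file path as a placeholder taken from the log.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file will
// expire within the given window, mirroring `openssl x509 -checkend`.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Placeholder path; the log checks several certs under /var/lib/minikube/certs.
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 86400*time.Second)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if expiring {
		fmt.Println("certificate will expire within 24h")
	} else {
		fmt.Println("certificate is valid for at least 24h")
	}
}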
	I0911 12:07:38.972698 2255187 kubeadm.go:404] StartCluster: {Name:embed-certs-235462 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:07:38.972838 2255187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:07:38.972906 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:39.006683 2255187 cri.go:89] found id: ""
	I0911 12:07:39.006780 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:07:39.017143 2255187 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:07:39.017173 2255187 kubeadm.go:636] restartCluster start
	I0911 12:07:39.017256 2255187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:07:39.029483 2255187 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.031111 2255187 kubeconfig.go:92] found "embed-certs-235462" server: "https://192.168.50.96:8443"
	I0911 12:07:39.034708 2255187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:07:39.046851 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.046919 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.058732 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.058756 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.058816 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.070011 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.570811 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.570945 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.583538 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.071137 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.071264 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.083997 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.570532 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.570646 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.583202 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.070241 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.070369 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.082992 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.570284 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.570420 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.582669 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.070231 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.070341 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.086964 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.570487 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.570592 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.582618 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.411715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:39.412168 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:39.412203 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:39.412129 2256281 retry.go:31] will retry after 2.048771803s: waiting for machine to come up
	I0911 12:07:41.463672 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:41.464124 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:41.464160 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:41.464061 2256281 retry.go:31] will retry after 2.459765131s: waiting for machine to come up
	I0911 12:07:43.071070 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.071249 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.087309 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.570993 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.571105 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.586884 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.070402 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.070525 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.082541 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.571170 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.571303 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.583295 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.070902 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.071002 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.087666 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.570274 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.570400 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.587352 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.070596 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.070729 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.082939 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.570445 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.570559 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.582782 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.070351 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.070485 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.082518 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.571060 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.571155 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.583891 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.926561 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:43.926941 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:43.926983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:43.926918 2256281 retry.go:31] will retry after 2.467825155s: waiting for machine to come up
	I0911 12:07:46.396258 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:46.396703 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:46.396736 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:46.396622 2256281 retry.go:31] will retry after 3.885293775s: waiting for machine to come up
	I0911 12:07:48.070904 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.070994 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.083706 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:48.570268 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.570404 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.582255 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
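The repeated "Checking apiserver status" records above run `sudo pgrep -xnf kube-apiserver.*minikube.*` roughly every 500ms until the API server process appears or a deadline is reached; the "context deadline exceeded" on the next line is that timeout firing. A sketch of such a poll-until-deadline loop using os/exec and context follows; the 10-second timeout and the helper name are assumptions, since the caller's real deadline is not shown in the log.

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// apiserverPID runs pgrep the way the log does and returns its output,
// or an error when no matching process exists yet.
func apiserverPID(ctx context.Context) (string, error) {
	out, err := exec.CommandContext(ctx, "sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
	if err != nil {
		return "", fmt.Errorf("unable to get apiserver pid: %w", err)
	}
	return string(out), nil
}

func main() {
	// Assumed overall deadline for the poll.
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	ticker := time.NewTicker(500 * time.Millisecond) // matches the ~500ms spacing in the log
	defer ticker.Stop()

	for {
		if pid, err := apiserverPID(ctx); err == nil {
			fmt.Println("apiserver process found:", pid)
			return
		}
		select {
		case <-ctx.Done():
			fmt.Println("apiserver error:", ctx.Err())
			return
		case <-ticker.C:
		}
	}
}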
	I0911 12:07:49.047880 2255187 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:07:49.047929 2255187 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:07:49.047951 2255187 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:07:49.048052 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:49.081907 2255187 cri.go:89] found id: ""
	I0911 12:07:49.082024 2255187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:07:49.099563 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:07:49.109373 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:07:49.109450 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119162 2255187 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119210 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.251091 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.995928 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.192421 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.288496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
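The five commands above re-run individual `kubeadm init` phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated config instead of performing a full `kubeadm init`. Below is a sketch of driving those phases in order from Go; the PATH prefix and config path are copied from the log, while the loop itself is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Phases in the order the log runs them during a cluster restart.
	phases := [][]string{
		{"certs", "all"},
		{"kubeconfig", "all"},
		{"kubelet-start"},
		{"control-plane", "all"},
		{"etcd", "local"},
	}
	for _, phase := range phases {
		args := append([]string{"init", "phase"}, phase...)
		args = append(args, "--config", "/var/tmp/minikube/kubeadm.yaml")

		// sudo env PATH=/var/lib/minikube/binaries/v1.28.1:$PATH kubeadm init phase ...
		cmd := exec.Command("sudo", append([]string{"env",
			"PATH=/var/lib/minikube/binaries/v1.28.1:" + os.Getenv("PATH"),
			"kubeadm"}, args...)...)
		cmd.Stdout = os.Stdout
		cmd.Stderr = os.Stderr
		fmt.Println("running: kubeadm", args)
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", phase, err)
			os.Exit(1)
		}
	}
}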
	I0911 12:07:50.365849 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:07:50.365943 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.383262 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.901757 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.401967 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.901613 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:52.402067 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.285991 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:50.286515 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:50.286547 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:50.286433 2256281 retry.go:31] will retry after 3.948880306s: waiting for machine to come up
	I0911 12:07:55.614569 2255814 start.go:369] acquired machines lock for "default-k8s-diff-port-484027" in 2m57.464444695s
	I0911 12:07:55.614642 2255814 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:55.614662 2255814 fix.go:54] fixHost starting: 
	I0911 12:07:55.615164 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:55.615208 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:55.635996 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0911 12:07:55.636556 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:55.637268 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:07:55.637295 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:55.637758 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:55.638000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:07:55.638191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:07:55.640059 2255814 fix.go:102] recreateIfNeeded on default-k8s-diff-port-484027: state=Stopped err=<nil>
	I0911 12:07:55.640086 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	W0911 12:07:55.640254 2255814 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:55.643100 2255814 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-484027" ...
	I0911 12:07:54.236661 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237200 2255304 main.go:141] libmachine: (old-k8s-version-642215) Found IP for machine: 192.168.61.58
	I0911 12:07:54.237226 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserving static IP address...
	I0911 12:07:54.237241 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has current primary IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237676 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.237717 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | skip adding static IP to network mk-old-k8s-version-642215 - found existing host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"}
	I0911 12:07:54.237736 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserved static IP address: 192.168.61.58
	I0911 12:07:54.237756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting for SSH to be available...
	I0911 12:07:54.237773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Getting to WaitForSSH function...
	I0911 12:07:54.240007 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240469 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.240521 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240610 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH client type: external
	I0911 12:07:54.240642 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa (-rw-------)
	I0911 12:07:54.240679 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:54.240700 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | About to run SSH command:
	I0911 12:07:54.240715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | exit 0
	I0911 12:07:54.337416 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:54.337857 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetConfigRaw
	I0911 12:07:54.338666 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.341640 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.341973 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.342025 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.342296 2255304 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/config.json ...
	I0911 12:07:54.342549 2255304 machine.go:88] provisioning docker machine ...
	I0911 12:07:54.342573 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:54.342809 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.342965 2255304 buildroot.go:166] provisioning hostname "old-k8s-version-642215"
	I0911 12:07:54.342986 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.343133 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.345466 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.345848 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.345881 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.346024 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.346214 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346491 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.346713 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.347165 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.347184 2255304 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642215 && echo "old-k8s-version-642215" | sudo tee /etc/hostname
	I0911 12:07:54.487005 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642215
	
	I0911 12:07:54.487058 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.489843 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490146 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.490175 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490378 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.490603 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490774 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490931 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.491146 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.491586 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.491612 2255304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642215/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:54.631441 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:54.631474 2255304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:54.631500 2255304 buildroot.go:174] setting up certificates
	I0911 12:07:54.631513 2255304 provision.go:83] configureAuth start
	I0911 12:07:54.631525 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.631988 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.634992 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635411 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.635448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635700 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.638219 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638608 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.638646 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638788 2255304 provision.go:138] copyHostCerts
	I0911 12:07:54.638870 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:54.638881 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:54.638957 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:54.639087 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:54.639099 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:54.639128 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:54.639278 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:54.639293 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:54.639322 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:54.639405 2255304 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642215 san=[192.168.61.58 192.168.61.58 localhost 127.0.0.1 minikube old-k8s-version-642215]
	I0911 12:07:54.792963 2255304 provision.go:172] copyRemoteCerts
	I0911 12:07:54.793027 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:54.793056 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.796196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796555 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.796592 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796884 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.797124 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.797410 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.797620 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:54.895690 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 12:07:54.923392 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:54.951276 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:54.979345 2255304 provision.go:86] duration metric: configureAuth took 347.814948ms
	I0911 12:07:54.979383 2255304 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:54.979690 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:07:54.979805 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.982955 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983405 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.983448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983618 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.983822 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984020 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984190 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.984377 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.984924 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.984948 2255304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:55.330958 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:55.330995 2255304 machine.go:91] provisioned docker machine in 988.429681ms
	I0911 12:07:55.331008 2255304 start.go:300] post-start starting for "old-k8s-version-642215" (driver="kvm2")
	I0911 12:07:55.331021 2255304 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:55.331049 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.331490 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:55.331536 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.334936 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335425 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.335467 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335645 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.335902 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.336075 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.336290 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.439126 2255304 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:55.445330 2255304 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:55.445370 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:55.445453 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:55.445564 2255304 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:55.445692 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:55.455235 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:55.480979 2255304 start.go:303] post-start completed in 149.950869ms
	I0911 12:07:55.481014 2255304 fix.go:56] fixHost completed within 23.694753941s
	I0911 12:07:55.481046 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.484222 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484612 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.484647 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484879 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.485159 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485352 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485527 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.485696 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:55.486109 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:55.486122 2255304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:55.614312 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434075.554093051
	
	I0911 12:07:55.614344 2255304 fix.go:206] guest clock: 1694434075.554093051
	I0911 12:07:55.614355 2255304 fix.go:219] Guest: 2023-09-11 12:07:55.554093051 +0000 UTC Remote: 2023-09-11 12:07:55.481020512 +0000 UTC m=+302.412352865 (delta=73.072539ms)
	I0911 12:07:55.614409 2255304 fix.go:190] guest clock delta is within tolerance: 73.072539ms
	I0911 12:07:55.614423 2255304 start.go:83] releasing machines lock for "old-k8s-version-642215", held for 23.828210342s
	I0911 12:07:55.614465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.614816 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:55.617993 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618444 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.618489 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619611 2255304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:55.619674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.619732 2255304 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:55.619767 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.622428 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622846 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.622873 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622894 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623012 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623191 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623279 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.623302 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623399 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623543 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.623615 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623747 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623891 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.742462 2255304 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:55.748982 2255304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:55.906639 2255304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:55.914088 2255304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:55.914183 2255304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:55.938200 2255304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:55.938240 2255304 start.go:466] detecting cgroup driver to use...
	I0911 12:07:55.938333 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:55.965549 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:55.986227 2255304 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:55.986308 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:56.003370 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:56.025702 2255304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:56.158835 2255304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:56.311687 2255304 docker.go:212] disabling docker service ...
	I0911 12:07:56.311770 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:56.337492 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:56.355858 2255304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:56.486823 2255304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:56.617414 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:56.634057 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:56.658242 2255304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0911 12:07:56.658370 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.670146 2255304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:56.670252 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.681790 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.695832 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.707434 2255304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:56.718631 2255304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:56.729355 2255304 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:56.729436 2255304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:56.744591 2255304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:56.755374 2255304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:56.906693 2255304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:07:57.131296 2255304 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:57.131439 2255304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:57.137554 2255304 start.go:534] Will wait 60s for crictl version
	I0911 12:07:57.137645 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:07:57.141720 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:57.178003 2255304 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:57.178110 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.236871 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.303639 2255304 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
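For illustration only: the sed invocations logged above rewrite the CRI-O drop-in so the runtime uses the registry.k8s.io/pause:3.1 pause image and the cgroupfs cgroup manager (with conmon_cgroup = "pod"). The Go sketch below applies the same three edits to an assumed sample of /etc/crio/crio.conf.d/02-crio.conf in memory; it is not minikube's code, and the starting config contents are an assumption.

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Assumed starting contents of /etc/crio/crio.conf.d/02-crio.conf.
	conf := `[crio.image]
pause_image = "registry.k8s.io/pause:3.9"

[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
`

	// Same edits as the logged sed commands, applied in memory:
	// 1. drop any existing conmon_cgroup line,
	// 2. pin the pause image,
	// 3. switch to the cgroupfs manager and re-add conmon_cgroup = "pod".
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*$\n?`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.1"`)
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, "cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}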
	I0911 12:07:52.901170 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.401940 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.430776 2255187 api_server.go:72] duration metric: took 3.064926262s to wait for apiserver process to appear ...
	I0911 12:07:53.430809 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:07:53.430837 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431478 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.431528 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431982 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.932765 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.216903 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.216947 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.216964 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.322957 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.322994 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.432419 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.444961 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.445016 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:56.932209 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.942202 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.942242 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:57.432361 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:57.440671 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:07:57.453348 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:07:57.453393 2255187 api_server.go:131] duration metric: took 4.0225758s to wait for apiserver health ...
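For illustration only: the api_server.go lines above poll the apiserver's /healthz endpoint, treating the 403 (anonymous user) and 500 (post-start hooks still failing) responses as "not ready yet" and stopping once a 200 "ok" arrives. A minimal Go sketch of that polling pattern follows; the URL, timeout, and the decision to skip TLS verification are assumptions made for the sketch, not details taken from this run.

package main

import (
	"crypto/tls"
	"errors"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	// No CA bundle in this sketch, so certificate verification is skipped;
	// real callers should supply the cluster CA instead.
	client := &http.Client{
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		Timeout:   5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			// Connection refused etc.: apiserver not listening yet.
			time.Sleep(500 * time.Millisecond)
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			return nil
		}
		// 403 and 500 both mean "keep waiting", as in the log above.
		fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for apiserver healthz")
}

func main() {
	if err := waitForHealthz("https://192.168.50.96:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}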
	I0911 12:07:57.453408 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:57.453418 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:57.455939 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:07:57.457968 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:07:57.488156 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:07:57.524742 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:07:57.543532 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:07:57.543601 2255187 system_pods.go:61] "coredns-5dd5756b68-pkzcf" [4a44c7ec-bb5b-40f0-8d44-d5b77666cb95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:07:57.543616 2255187 system_pods.go:61] "etcd-embed-certs-235462" [c14f9910-0d1d-4494-9ebe-97173ab9abe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:07:57.543671 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4d95f49f-f9ad-40ce-9101-7e67ad978353] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:07:57.543686 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [753eea69-23f4-46f8-b631-36cf0f34d663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:07:57.543701 2255187 system_pods.go:61] "kube-proxy-v24dz" [e527b198-cf8f-4ada-af22-7979b249efd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:07:57.543711 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [b092d336-c45d-4b2c-87a5-df253a5fddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:07:57.543722 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-ldjwn" [4761a51f-8912-4be4-aa1d-2574e10da791] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:07:57.543735 2255187 system_pods.go:61] "storage-provisioner" [810336ff-14a1-4b3d-a4ff-2569f3710bab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:07:57.543744 2255187 system_pods.go:74] duration metric: took 18.975758ms to wait for pod list to return data ...
	I0911 12:07:57.543770 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:07:57.550468 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:07:57.550512 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:07:57.550527 2255187 node_conditions.go:105] duration metric: took 6.741621ms to run NodePressure ...
	I0911 12:07:57.550552 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:55.644857 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Start
	I0911 12:07:55.645094 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring networks are active...
	I0911 12:07:55.646010 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network default is active
	I0911 12:07:55.646393 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network mk-default-k8s-diff-port-484027 is active
	I0911 12:07:55.646808 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Getting domain xml...
	I0911 12:07:55.647513 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Creating domain...
	I0911 12:07:57.083879 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting to get IP...
	I0911 12:07:57.084769 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085290 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085361 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.085279 2256448 retry.go:31] will retry after 226.596764ms: waiting for machine to come up
	I0911 12:07:57.313593 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314083 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.314029 2256448 retry.go:31] will retry after 315.605673ms: waiting for machine to come up
	I0911 12:07:57.631774 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632292 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632329 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.632179 2256448 retry.go:31] will retry after 400.211275ms: waiting for machine to come up
	I0911 12:07:58.034189 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.305610 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:57.309276 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.309677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:57.309721 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.310066 2255304 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:57.316611 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:57.335580 2255304 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0911 12:07:57.335689 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:57.380592 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:07:57.380690 2255304 ssh_runner.go:195] Run: which lz4
	I0911 12:07:57.386023 2255304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:07:57.391807 2255304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:57.391861 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0911 12:07:58.002314 2255187 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010948 2255187 kubeadm.go:787] kubelet initialised
	I0911 12:07:58.010981 2255187 kubeadm.go:788] duration metric: took 8.627903ms waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010993 2255187 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:07:58.020253 2255187 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.027844 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027876 2255187 pod_ready.go:81] duration metric: took 7.583678ms waiting for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.027888 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027900 2255187 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.050283 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050321 2255187 pod_ready.go:81] duration metric: took 22.413628ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.050352 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050369 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.060314 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060348 2255187 pod_ready.go:81] duration metric: took 9.962502ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.060360 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060371 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.069122 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069152 2255187 pod_ready.go:81] duration metric: took 8.771982ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.069164 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069176 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329758 2255187 pod_ready.go:92] pod "kube-proxy-v24dz" in "kube-system" namespace has status "Ready":"True"
	I0911 12:07:59.329789 2255187 pod_ready.go:81] duration metric: took 1.260592229s waiting for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329804 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:01.526483 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
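For illustration only: the pod_ready.go lines above wait for individual kube-system pods to report the Ready condition. The client-go loop below performs the same check; the kubeconfig path and the fixed 4-minute timeout are assumptions for the sketch, and the pod name is just the one mentioned in the log.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the pod's PodReady condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Hypothetical kubeconfig path; adjust for the cluster under test.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"kube-scheduler-embed-certs-235462", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}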
	I0911 12:07:58.034838 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.037141 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.034724 2256448 retry.go:31] will retry after 394.484585ms: waiting for machine to come up
	I0911 12:07:58.431365 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.431982 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.432004 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.431886 2256448 retry.go:31] will retry after 593.506569ms: waiting for machine to come up
	I0911 12:07:59.026841 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027490 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027518 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.027389 2256448 retry.go:31] will retry after 666.166785ms: waiting for machine to come up
	I0911 12:07:59.694652 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695161 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.695113 2256448 retry.go:31] will retry after 975.320046ms: waiting for machine to come up
	I0911 12:08:00.672258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672804 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672851 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:00.672755 2256448 retry.go:31] will retry after 1.161656415s: waiting for machine to come up
	I0911 12:08:01.835653 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836186 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836223 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:01.836130 2256448 retry.go:31] will retry after 1.505608393s: waiting for machine to come up
	I0911 12:07:59.503695 2255304 crio.go:444] Took 2.117718 seconds to copy over tarball
	I0911 12:07:59.503800 2255304 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:02.939001 2255304 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.435164165s)
	I0911 12:08:02.939037 2255304 crio.go:451] Took 3.435307 seconds to extract the tarball
	I0911 12:08:02.939050 2255304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:02.984446 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:03.037419 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:08:03.037452 2255304 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:03.037546 2255304 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.037582 2255304 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.037597 2255304 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.037628 2255304 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.037583 2255304 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.037607 2255304 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0911 12:08:03.037551 2255304 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.037549 2255304 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.039413 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.039639 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.039819 2255304 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.039854 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.040031 2255304 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.040241 2255304 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0911 12:08:03.815561 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:04.614171 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:04.614199 2255187 pod_ready.go:81] duration metric: took 5.28438743s waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:04.614211 2255187 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:06.638688 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:03.343936 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353931 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353970 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:03.344315 2256448 retry.go:31] will retry after 1.414606279s: waiting for machine to come up
	I0911 12:08:04.761183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761667 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:04.761607 2256448 retry.go:31] will retry after 1.846261641s: waiting for machine to come up
	I0911 12:08:06.609258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609917 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609965 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:06.609851 2256448 retry.go:31] will retry after 2.938814697s: waiting for machine to come up
	I0911 12:08:03.225129 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.227566 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.231565 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.233817 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.239841 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0911 12:08:03.243250 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.247155 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.522779 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.711354 2255304 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0911 12:08:03.711381 2255304 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0911 12:08:03.711438 2255304 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0911 12:08:03.711473 2255304 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.711501 2255304 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0911 12:08:03.711514 2255304 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0911 12:08:03.711530 2255304 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0911 12:08:03.711602 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711641 2255304 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0911 12:08:03.711678 2255304 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.711735 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711536 2255304 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.711823 2255304 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0911 12:08:03.711854 2255304 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.711856 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711894 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711475 2255304 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.711934 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711541 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711474 2255304 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.712005 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.823116 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0911 12:08:03.823136 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.823232 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.823349 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.823374 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.823429 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.823499 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.957383 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0911 12:08:03.957459 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0911 12:08:03.957513 2255304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.957521 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0911 12:08:03.957564 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0911 12:08:03.957649 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0911 12:08:03.957707 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0911 12:08:03.957743 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0911 12:08:03.962841 2255304 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0911 12:08:03.962863 2255304 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.962905 2255304 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0911 12:08:05.018464 2255304 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.055478429s)
	I0911 12:08:05.018510 2255304 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0911 12:08:05.018571 2255304 cache_images.go:92] LoadImages completed in 1.981102195s
	W0911 12:08:05.018661 2255304 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0911 12:08:05.018747 2255304 ssh_runner.go:195] Run: crio config
	I0911 12:08:05.107550 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:05.107585 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:05.107614 2255304 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:05.107641 2255304 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642215 NodeName:old-k8s-version-642215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 12:08:05.107908 2255304 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-642215
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.58:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:05.108027 2255304 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642215 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:08:05.108106 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0911 12:08:05.120210 2255304 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:05.120311 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:05.129517 2255304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0911 12:08:05.151855 2255304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:05.169543 2255304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0911 12:08:05.190304 2255304 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:05.196014 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:05.211627 2255304 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215 for IP: 192.168.61.58
	I0911 12:08:05.211663 2255304 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:05.211876 2255304 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:05.211943 2255304 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:05.212043 2255304 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.key
	I0911 12:08:05.212130 2255304 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key.7152e027
	I0911 12:08:05.212217 2255304 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key
	I0911 12:08:05.212397 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:05.212451 2255304 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:05.212467 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:05.212500 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:05.212531 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:05.212568 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:05.212637 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:05.213373 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:05.242362 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:05.272949 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:05.299359 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:05.326203 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:05.354388 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:05.385150 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:05.415683 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:05.449119 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:05.476397 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:05.503652 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:05.531520 2255304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:05.550108 2255304 ssh_runner.go:195] Run: openssl version
	I0911 12:08:05.556982 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:05.569083 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574490 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574570 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.581479 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:05.596824 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:05.607900 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613627 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613711 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.620309 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:05.630995 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:05.645786 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652682 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652773 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.660784 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:05.675417 2255304 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:05.681969 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:05.690345 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:05.697454 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:05.706283 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:05.712913 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:05.719308 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 12:08:05.726307 2255304 kubeadm.go:404] StartCluster: {Name:old-k8s-version-642215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:05.726414 2255304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:05.726478 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:05.765092 2255304 cri.go:89] found id: ""
	I0911 12:08:05.765172 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:05.775654 2255304 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:05.775681 2255304 kubeadm.go:636] restartCluster start
	I0911 12:08:05.775749 2255304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:05.785235 2255304 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.786289 2255304 kubeconfig.go:92] found "old-k8s-version-642215" server: "https://192.168.61.58:8443"
	I0911 12:08:05.789768 2255304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:05.799009 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.799092 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.811208 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.811235 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.811301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.822223 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.322909 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.323053 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.337866 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.823220 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.823328 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.839573 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.323145 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.323245 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.335054 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.822427 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.822536 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.834385 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.146768 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:11.637314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:09.552075 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552494 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:09.552442 2256448 retry.go:31] will retry after 3.623277093s: waiting for machine to come up
	I0911 12:08:08.323215 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.323301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.335501 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:08.822942 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.823061 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.840055 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.322586 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.322692 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.338101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.822702 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.822845 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.835245 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.322666 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.322750 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.337101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.822530 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.822662 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.838511 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.323206 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.323329 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.338239 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.822952 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.823044 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.838752 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.323296 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.323384 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.335174 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.822659 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.822775 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.834762 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.637784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:16.138584 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:13.178553 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179008 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179041 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:13.178961 2256448 retry.go:31] will retry after 3.636806595s: waiting for machine to come up
	I0911 12:08:16.818087 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818548 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has current primary IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Found IP for machine: 192.168.39.230
	I0911 12:08:16.818600 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserving static IP address...
	I0911 12:08:16.819118 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.819156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserved static IP address: 192.168.39.230
	I0911 12:08:16.819182 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | skip adding static IP to network mk-default-k8s-diff-port-484027 - found existing host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"}
	I0911 12:08:16.819204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Getting to WaitForSSH function...
	I0911 12:08:16.819221 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for SSH to be available...
	I0911 12:08:16.821746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822235 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.822270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822454 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH client type: external
	I0911 12:08:16.822500 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa (-rw-------)
	I0911 12:08:16.822551 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:16.822576 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | About to run SSH command:
	I0911 12:08:16.822590 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | exit 0
	I0911 12:08:16.957464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:16.957845 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetConfigRaw
	I0911 12:08:16.958573 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:16.961262 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.961726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.961762 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.962073 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:08:16.962281 2255814 machine.go:88] provisioning docker machine ...
	I0911 12:08:16.962301 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:16.962594 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962777 2255814 buildroot.go:166] provisioning hostname "default-k8s-diff-port-484027"
	I0911 12:08:16.962799 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962971 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:16.965571 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966095 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.966134 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966313 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:16.966531 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966685 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966837 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:16.967021 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:16.967739 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:16.967764 2255814 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-484027 && echo "default-k8s-diff-port-484027" | sudo tee /etc/hostname
	I0911 12:08:17.106967 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-484027
	
	I0911 12:08:17.107036 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.110243 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110663 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.110737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.111197 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111388 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.111782 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.112200 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.112223 2255814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-484027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-484027/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-484027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:17.238410 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:17.238450 2255814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:17.238508 2255814 buildroot.go:174] setting up certificates
	I0911 12:08:17.238520 2255814 provision.go:83] configureAuth start
	I0911 12:08:17.238536 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:17.238938 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:17.241635 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242044 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.242106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242209 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.244737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245093 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.245117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245295 2255814 provision.go:138] copyHostCerts
	I0911 12:08:17.245360 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:17.245375 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:17.245434 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:17.245530 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:17.245537 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:17.245557 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:17.245627 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:17.245634 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:17.245651 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:17.245708 2255814 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-484027 san=[192.168.39.230 192.168.39.230 localhost 127.0.0.1 minikube default-k8s-diff-port-484027]
	I0911 12:08:17.540142 2255814 provision.go:172] copyRemoteCerts
	I0911 12:08:17.540233 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:17.540270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.543823 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544237 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.544277 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544485 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.544706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.544916 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.545060 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:17.645425 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:17.675288 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0911 12:08:17.703043 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:17.732683 2255814 provision.go:86] duration metric: configureAuth took 494.12506ms
	I0911 12:08:17.732713 2255814 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:17.732955 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:17.733076 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.736740 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.737244 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.737707 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.737914 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.738084 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.738324 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.738749 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.738774 2255814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:13.323070 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.323174 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.334828 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.822403 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.822490 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.834374 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.323004 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.323100 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.334774 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.822351 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.822465 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.834368 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:15.323045 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:15.323154 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:15.334863 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:15.799700 2255304 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:15.799736 2255304 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:15.799751 2255304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:15.799821 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:15.831051 2255304 cri.go:89] found id: ""
	I0911 12:08:15.831140 2255304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:15.847072 2255304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:15.856353 2255304 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:15.856425 2255304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865711 2255304 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865740 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:15.990047 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.312314 2255304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.322225408s)
	I0911 12:08:17.312354 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.521733 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.627343 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.723857 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:17.723964 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:17.742688 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.336038 2255048 start.go:369] acquired machines lock for "no-preload-352076" in 1m2.388468349s
	I0911 12:08:18.336100 2255048 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:08:18.336125 2255048 fix.go:54] fixHost starting: 
	I0911 12:08:18.336615 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:18.336663 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:18.355715 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0911 12:08:18.356243 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:18.356901 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:08:18.356931 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:18.357385 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:18.357585 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:18.357787 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:08:18.359541 2255048 fix.go:102] recreateIfNeeded on no-preload-352076: state=Stopped err=<nil>
	I0911 12:08:18.359571 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	W0911 12:08:18.359750 2255048 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:08:18.361628 2255048 out.go:177] * Restarting existing kvm2 VM for "no-preload-352076" ...
	I0911 12:08:18.363286 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Start
	I0911 12:08:18.363532 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring networks are active...
	I0911 12:08:18.364515 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network default is active
	I0911 12:08:18.364894 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network mk-no-preload-352076 is active
	I0911 12:08:18.365345 2255048 main.go:141] libmachine: (no-preload-352076) Getting domain xml...
	I0911 12:08:18.366191 2255048 main.go:141] libmachine: (no-preload-352076) Creating domain...
	I0911 12:08:18.078952 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:18.078979 2255814 machine.go:91] provisioned docker machine in 1.116684764s
	I0911 12:08:18.078991 2255814 start.go:300] post-start starting for "default-k8s-diff-port-484027" (driver="kvm2")
	I0911 12:08:18.079011 2255814 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:18.079057 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.079482 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:18.079520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.082212 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082641 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.082674 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.083043 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.083227 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.083403 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.170810 2255814 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:18.175342 2255814 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:18.175370 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:18.175457 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:18.175583 2255814 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:18.175722 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:18.184543 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:18.209487 2255814 start.go:303] post-start completed in 130.475291ms
	I0911 12:08:18.209516 2255814 fix.go:56] fixHost completed within 22.594854569s
	I0911 12:08:18.209540 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.212339 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212779 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.212832 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212967 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.213187 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213366 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213515 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.213680 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:18.214071 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:18.214083 2255814 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:08:18.335862 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434098.277311369
	
	I0911 12:08:18.335893 2255814 fix.go:206] guest clock: 1694434098.277311369
	I0911 12:08:18.335902 2255814 fix.go:219] Guest: 2023-09-11 12:08:18.277311369 +0000 UTC Remote: 2023-09-11 12:08:18.20951981 +0000 UTC m=+200.212950109 (delta=67.791559ms)
	I0911 12:08:18.335925 2255814 fix.go:190] guest clock delta is within tolerance: 67.791559ms
	I0911 12:08:18.335932 2255814 start.go:83] releasing machines lock for "default-k8s-diff-port-484027", held for 22.721324127s
	I0911 12:08:18.335977 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.336342 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:18.339935 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340372 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.340411 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340801 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341832 2255814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:18.341895 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.342153 2255814 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:18.342219 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.345331 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345619 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345716 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.345751 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346068 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346282 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.346367 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.346409 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346443 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.346624 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.346803 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346960 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.347119 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.347284 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.455877 2255814 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:18.463787 2255814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:18.620444 2255814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:18.628878 2255814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:18.628972 2255814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:18.652267 2255814 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:18.652301 2255814 start.go:466] detecting cgroup driver to use...
	I0911 12:08:18.652381 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:18.672306 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:18.690514 2255814 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:18.690594 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:18.709032 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:18.727521 2255814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:18.859864 2255814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:19.005708 2255814 docker.go:212] disabling docker service ...
	I0911 12:08:19.005809 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:19.026177 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:19.043931 2255814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:19.184060 2255814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:19.305184 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:19.326550 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:19.351313 2255814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:19.351400 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.366747 2255814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:19.366836 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.382272 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.395743 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.408786 2255814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:19.424229 2255814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:19.438367 2255814 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:19.438450 2255814 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:19.457417 2255814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:08:19.470001 2255814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:19.629977 2255814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:19.846900 2255814 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:19.846994 2255814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:19.854282 2255814 start.go:534] Will wait 60s for crictl version
	I0911 12:08:19.854378 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:08:19.859252 2255814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:19.897263 2255814 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:19.897349 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:19.966155 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:20.024697 2255814 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:08:18.639188 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.649395 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.026156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:20.029726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030249 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:20.030286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030572 2255814 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:20.035523 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:20.053903 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:20.053997 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:20.096570 2255814 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:20.096666 2255814 ssh_runner.go:195] Run: which lz4
	I0911 12:08:20.102350 2255814 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:08:20.107338 2255814 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:08:20.107385 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:08:22.215033 2255814 crio.go:444] Took 2.112735 seconds to copy over tarball
	I0911 12:08:22.215168 2255814 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:18.262191 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.762029 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.262094 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.316271 2255304 api_server.go:72] duration metric: took 1.592409696s to wait for apiserver process to appear ...
	I0911 12:08:19.316309 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:19.316329 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:19.892254 2255048 main.go:141] libmachine: (no-preload-352076) Waiting to get IP...
	I0911 12:08:19.893353 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:19.893857 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:19.893939 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:19.893867 2256639 retry.go:31] will retry after 256.490953ms: waiting for machine to come up
	I0911 12:08:20.152717 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.153686 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.153718 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.153662 2256639 retry.go:31] will retry after 308.528476ms: waiting for machine to come up
	I0911 12:08:20.464569 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.465179 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.465240 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.465150 2256639 retry.go:31] will retry after 329.79495ms: waiting for machine to come up
	I0911 12:08:20.797010 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.797581 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.797615 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.797512 2256639 retry.go:31] will retry after 388.108578ms: waiting for machine to come up
	I0911 12:08:21.187304 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.187980 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.188006 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.187878 2256639 retry.go:31] will retry after 547.488463ms: waiting for machine to come up
	I0911 12:08:21.736835 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.737425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.737466 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.737352 2256639 retry.go:31] will retry after 669.118316ms: waiting for machine to come up
	I0911 12:08:22.407727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:22.408435 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:22.408471 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:22.408353 2256639 retry.go:31] will retry after 986.70059ms: waiting for machine to come up
	I0911 12:08:23.139403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.141299 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:27.493149 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.680145 2255814 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.464917771s)
	I0911 12:08:25.680187 2255814 crio.go:451] Took 3.465097 seconds to extract the tarball
	I0911 12:08:25.680201 2255814 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:25.721940 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:25.770149 2255814 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:08:25.770189 2255814 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:08:25.770296 2255814 ssh_runner.go:195] Run: crio config
	I0911 12:08:25.844108 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:25.844142 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:25.844170 2255814 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:25.844197 2255814 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-484027 NodeName:default-k8s-diff-port-484027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:08:25.844471 2255814 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-484027"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:25.844584 2255814 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-484027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0911 12:08:25.844751 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:08:25.855558 2255814 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:25.855658 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:25.865531 2255814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0911 12:08:25.890631 2255814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:25.914304 2255814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0911 12:08:25.938065 2255814 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:25.943138 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:25.963689 2255814 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027 for IP: 192.168.39.230
	I0911 12:08:25.963744 2255814 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:25.963968 2255814 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:25.964026 2255814 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:25.964139 2255814 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.key
	I0911 12:08:25.964245 2255814 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key.165d62e4
	I0911 12:08:25.964309 2255814 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key
	I0911 12:08:25.964546 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:25.964599 2255814 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:25.964618 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:25.964655 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:25.964699 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:25.964731 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:25.964805 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:25.965758 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:26.001391 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:26.032345 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:26.065593 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:26.100792 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:26.135603 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:26.170029 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:26.203119 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:26.232040 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:26.262353 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:26.292733 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:26.326750 2255814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:26.346334 2255814 ssh_runner.go:195] Run: openssl version
	I0911 12:08:26.353175 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:26.365742 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372007 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372086 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.378954 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:26.390365 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:26.403147 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.410930 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.411048 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.419889 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:26.433366 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:26.445752 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452481 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452563 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.461097 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:26.477855 2255814 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:26.483947 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:26.492879 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:26.501391 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:26.510124 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:26.518732 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:26.527356 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 12:08:26.536063 2255814 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:26.536225 2255814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:26.536300 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:26.575522 2255814 cri.go:89] found id: ""
	I0911 12:08:26.575617 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:26.586011 2255814 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:26.586043 2255814 kubeadm.go:636] restartCluster start
	I0911 12:08:26.586114 2255814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:26.596758 2255814 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.598534 2255814 kubeconfig.go:92] found "default-k8s-diff-port-484027" server: "https://192.168.39.230:8444"
	I0911 12:08:26.603031 2255814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:26.617921 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.618066 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.632719 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.632739 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.632793 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.650036 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.150299 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.150397 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.165783 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.650311 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.650416 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.665184 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:24.317268 2255304 api_server.go:269] stopped: https://192.168.61.58:8443/healthz: Get "https://192.168.61.58:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0911 12:08:24.317328 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:26.742901 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:26.742942 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:27.243118 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.654196 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.654260 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:27.743438 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.767557 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.767607 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:28.243610 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:28.251858 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:28.262619 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:28.262659 2255304 api_server.go:131] duration metric: took 8.946341912s to wait for apiserver health ...
	I0911 12:08:28.262670 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:28.262676 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:28.264705 2255304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:23.396798 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:23.398997 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:23.399029 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:23.397251 2256639 retry.go:31] will retry after 1.384367074s: waiting for machine to come up
	I0911 12:08:24.783036 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:24.783547 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:24.783584 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:24.783489 2256639 retry.go:31] will retry after 1.172643107s: waiting for machine to come up
	I0911 12:08:25.958217 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:25.958989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:25.959024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:25.958929 2256639 retry.go:31] will retry after 2.243377044s: waiting for machine to come up
	I0911 12:08:28.205538 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:28.206196 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:28.206226 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:28.206137 2256639 retry.go:31] will retry after 1.83460511s: waiting for machine to come up
	I0911 12:08:28.266346 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:28.280404 2255304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:28.308228 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:28.317951 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:28.317994 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:28.318002 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:28.318010 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:28.318024 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Pending
	I0911 12:08:28.318030 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:28.318035 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:28.318039 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:28.318045 2255304 system_pods.go:74] duration metric: took 9.788007ms to wait for pod list to return data ...
	I0911 12:08:28.318055 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:28.323536 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:28.323578 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:28.323593 2255304 node_conditions.go:105] duration metric: took 5.532859ms to run NodePressure ...
	I0911 12:08:28.323619 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:28.927871 2255304 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938224 2255304 kubeadm.go:787] kubelet initialised
	I0911 12:08:28.938256 2255304 kubeadm.go:788] duration metric: took 10.348938ms waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938267 2255304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:28.944405 2255304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.951735 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951774 2255304 pod_ready.go:81] duration metric: took 7.334386ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.951786 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951800 2255304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.964451 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964487 2255304 pod_ready.go:81] duration metric: took 12.678175ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.964499 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964510 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.971472 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971503 2255304 pod_ready.go:81] duration metric: took 6.983445ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.971514 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971523 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.978657 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978691 2255304 pod_ready.go:81] duration metric: took 7.156987ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.978704 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978728 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.334593 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334652 2255304 pod_ready.go:81] duration metric: took 355.905465ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.334670 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334683 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.734221 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734262 2255304 pod_ready.go:81] duration metric: took 399.567918ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.734275 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734287 2255304 pod_ready.go:38] duration metric: took 796.006553ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:29.734313 2255304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:29.749280 2255304 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:29.749313 2255304 kubeadm.go:640] restartCluster took 23.973623788s
	I0911 12:08:29.749325 2255304 kubeadm.go:406] StartCluster complete in 24.023033441s
	I0911 12:08:29.749349 2255304 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.749453 2255304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:29.752216 2255304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.752582 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:29.752784 2255304 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:29.752912 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:08:29.752947 2255304 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-642215"
	I0911 12:08:29.752971 2255304 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-642215"
	I0911 12:08:29.752976 2255304 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753016 2255304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-642215"
	W0911 12:08:29.752979 2255304 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:29.753159 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.752984 2255304 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753232 2255304 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-642215"
	W0911 12:08:29.753281 2255304 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:29.753369 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.753517 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753554 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753599 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753630 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753954 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.754016 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.773524 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:08:29.773614 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0911 12:08:29.774181 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774418 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774950 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.774967 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775141 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0911 12:08:29.775158 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.775176 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775584 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775585 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775597 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.775756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.776112 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776144 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.776178 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.776197 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.776510 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.776970 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776990 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.790443 2255304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-642215" context rescaled to 1 replicas
	I0911 12:08:29.790502 2255304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:29.793918 2255304 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:29.796131 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:29.798116 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0911 12:08:29.798581 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.799554 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.799580 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.800105 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.800439 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.802764 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.805061 2255304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:29.803246 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0911 12:08:29.807001 2255304 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:29.807025 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:29.807053 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.807866 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.807924 2255304 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-642215"
	W0911 12:08:29.807949 2255304 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:29.807985 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.808406 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.808442 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.809644 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.809667 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.817010 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.817046 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.817101 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817131 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.817158 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817555 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.817625 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.817868 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.818244 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.820240 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.822846 2255304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:29.824505 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:29.824526 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:29.824554 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.827924 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828359 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.828396 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828684 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.828950 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.829099 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.829285 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.830900 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0911 12:08:29.831463 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.832028 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.832049 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.832646 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.833261 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.833313 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.868600 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0911 12:08:29.869171 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.869822 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.869842 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.870236 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.870416 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.872928 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.873212 2255304 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:29.873232 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:29.873255 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.876313 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.876963 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.876983 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.876999 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.877168 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.877331 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.877468 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:30.019745 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:30.061364 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:30.061393 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:30.080562 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:30.100494 2255304 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:30.100511 2255304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:30.120618 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:30.120647 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:30.173391 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.173427 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:30.208772 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.757802 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.757841 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.757982 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758021 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758294 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758334 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758344 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758353 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758377 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758620 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758646 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758660 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758677 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758690 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758701 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758717 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758743 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758943 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758954 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.759016 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.759052 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.759062 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859384 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859426 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.859828 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.859853 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859864 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859874 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.860302 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.860336 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.860357 2255304 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-642215"
	I0911 12:08:30.862687 2255304 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:08:29.637791 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:31.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:28.150174 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.150294 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.166331 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:28.650905 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.650996 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.664146 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.150646 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.150745 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.166569 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.651031 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.651129 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.664106 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.150429 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.150535 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.167297 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.650364 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.650458 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.664180 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.150419 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.150521 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.168242 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.650834 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.650922 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.664772 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.150232 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.150362 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.163224 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.650676 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.650773 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.667077 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.864433 2255304 addons.go:502] enable addons completed in 1.111642638s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:08:32.139191 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:30.042388 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:30.043026 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:30.043054 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:30.042967 2256639 retry.go:31] will retry after 3.282840664s: waiting for machine to come up
	I0911 12:08:33.327456 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:33.328007 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:33.328066 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:33.327941 2256639 retry.go:31] will retry after 4.185053881s: waiting for machine to come up
	I0911 12:08:33.639996 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:36.139377 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:33.150668 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.150797 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.163178 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:33.650733 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.650845 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.666475 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.150939 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.151037 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.163985 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.650139 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.650250 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.664850 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.150224 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.150351 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.169894 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.650946 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.651044 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.665438 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.151019 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:36.151134 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:36.164843 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.618412 2255814 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:36.618460 2255814 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:36.618482 2255814 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:36.618571 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:36.657264 2255814 cri.go:89] found id: ""
	I0911 12:08:36.657366 2255814 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:36.676222 2255814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:36.686832 2255814 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:36.686923 2255814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699618 2255814 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699654 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:36.842821 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.471899 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.699214 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.784721 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.870994 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:37.871085 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:37.894561 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:34.638777 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.138575 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.515376 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:37.515955 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:37.515989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:37.515896 2256639 retry.go:31] will retry after 3.472863196s: waiting for machine to come up
	I0911 12:08:38.138433 2255304 node_ready.go:49] node "old-k8s-version-642215" has status "Ready":"True"
	I0911 12:08:38.138464 2255304 node_ready.go:38] duration metric: took 8.037923512s waiting for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:38.138475 2255304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:38.143177 2255304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664411 2255304 pod_ready.go:92] pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.664449 2255304 pod_ready.go:81] duration metric: took 521.244524ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664463 2255304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670838 2255304 pod_ready.go:92] pod "etcd-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.670876 2255304 pod_ready.go:81] duration metric: took 6.404356ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670890 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679254 2255304 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.679284 2255304 pod_ready.go:81] duration metric: took 8.385069ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679299 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939484 2255304 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.939514 2255304 pod_ready.go:81] duration metric: took 260.206232ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939529 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337858 2255304 pod_ready.go:92] pod "kube-proxy-855lt" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.337894 2255304 pod_ready.go:81] duration metric: took 398.358394ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337907 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738437 2255304 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.738465 2255304 pod_ready.go:81] duration metric: took 400.549428ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738479 2255304 pod_ready.go:38] duration metric: took 1.599991385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:39.738509 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:39.738569 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.760727 2255304 api_server.go:72] duration metric: took 9.970181642s to wait for apiserver process to appear ...
	I0911 12:08:39.760774 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:39.760797 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:39.768195 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:39.769416 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:39.769442 2255304 api_server.go:131] duration metric: took 8.658497ms to wait for apiserver health ...
	I0911 12:08:39.769457 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:39.940647 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:39.940683 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:39.940701 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:39.940708 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:39.940715 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:39.940722 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:39.940729 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:39.940736 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:39.940747 2255304 system_pods.go:74] duration metric: took 171.283587ms to wait for pod list to return data ...
	I0911 12:08:39.940763 2255304 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:08:40.139718 2255304 default_sa.go:45] found service account: "default"
	I0911 12:08:40.139751 2255304 default_sa.go:55] duration metric: took 198.981243ms for default service account to be created ...
	I0911 12:08:40.139763 2255304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:08:40.340959 2255304 system_pods.go:86] 7 kube-system pods found
	I0911 12:08:40.340998 2255304 system_pods.go:89] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:40.341008 2255304 system_pods.go:89] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:40.341015 2255304 system_pods.go:89] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:40.341028 2255304 system_pods.go:89] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:40.341035 2255304 system_pods.go:89] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:40.341042 2255304 system_pods.go:89] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:40.341051 2255304 system_pods.go:89] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:40.341061 2255304 system_pods.go:126] duration metric: took 201.290886ms to wait for k8s-apps to be running ...
	I0911 12:08:40.341073 2255304 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:08:40.341163 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:40.359994 2255304 system_svc.go:56] duration metric: took 18.903474ms WaitForService to wait for kubelet.
	I0911 12:08:40.360036 2255304 kubeadm.go:581] duration metric: took 10.569498287s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:08:40.360063 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:40.538713 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:40.538748 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:40.538762 2255304 node_conditions.go:105] duration metric: took 178.692637ms to run NodePressure ...
	I0911 12:08:40.538778 2255304 start.go:228] waiting for startup goroutines ...
	I0911 12:08:40.538785 2255304 start.go:233] waiting for cluster config update ...
	I0911 12:08:40.538798 2255304 start.go:242] writing updated cluster config ...
	I0911 12:08:40.539175 2255304 ssh_runner.go:195] Run: rm -f paused
	I0911 12:08:40.601745 2255304 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0911 12:08:40.604230 2255304 out.go:177] 
	W0911 12:08:40.606184 2255304 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0911 12:08:40.607933 2255304 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0911 12:08:40.609870 2255304 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-642215" cluster and "default" namespace by default
	I0911 12:08:38.638441 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:40.639280 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:38.411419 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:38.910721 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.410710 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.911432 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.411115 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.438764 2255814 api_server.go:72] duration metric: took 2.567766062s to wait for apiserver process to appear ...
	I0911 12:08:40.438803 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:40.438828 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.439582 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.439644 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.440098 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.940202 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.989968 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990485 2255048 main.go:141] libmachine: (no-preload-352076) Found IP for machine: 192.168.72.157
	I0911 12:08:40.990519 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has current primary IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990530 2255048 main.go:141] libmachine: (no-preload-352076) Reserving static IP address...
	I0911 12:08:40.990910 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.990942 2255048 main.go:141] libmachine: (no-preload-352076) Reserved static IP address: 192.168.72.157
	I0911 12:08:40.991004 2255048 main.go:141] libmachine: (no-preload-352076) Waiting for SSH to be available...
	I0911 12:08:40.991024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | skip adding static IP to network mk-no-preload-352076 - found existing host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"}
	I0911 12:08:40.991044 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Getting to WaitForSSH function...
	I0911 12:08:40.994061 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994417 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.994478 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994612 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH client type: external
	I0911 12:08:40.994653 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa (-rw-------)
	I0911 12:08:40.994693 2255048 main.go:141] libmachine: (no-preload-352076) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:40.994711 2255048 main.go:141] libmachine: (no-preload-352076) DBG | About to run SSH command:
	I0911 12:08:40.994725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | exit 0
	I0911 12:08:41.093865 2255048 main.go:141] libmachine: (no-preload-352076) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:41.094360 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetConfigRaw
	I0911 12:08:41.095142 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.098534 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.098944 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.098985 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.099319 2255048 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/config.json ...
	I0911 12:08:41.099667 2255048 machine.go:88] provisioning docker machine ...
	I0911 12:08:41.099711 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:41.100079 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100503 2255048 buildroot.go:166] provisioning hostname "no-preload-352076"
	I0911 12:08:41.100535 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100868 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.104253 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.104920 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.105102 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.105420 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.105864 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106201 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106627 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.106937 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.107432 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.107447 2255048 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-352076 && echo "no-preload-352076" | sudo tee /etc/hostname
	I0911 12:08:41.249885 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-352076
	
	I0911 12:08:41.249919 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.253419 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.253854 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.253892 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.254125 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.254373 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254576 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254752 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.254945 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.255592 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.255624 2255048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-352076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-352076/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-352076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:41.394308 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:41.394348 2255048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:41.394378 2255048 buildroot.go:174] setting up certificates
	I0911 12:08:41.394388 2255048 provision.go:83] configureAuth start
	I0911 12:08:41.394401 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.394737 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.398042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398506 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.398540 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398747 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.401368 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401743 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.401797 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401939 2255048 provision.go:138] copyHostCerts
	I0911 12:08:41.402020 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:41.402034 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:41.402102 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:41.402226 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:41.402238 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:41.402278 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:41.402374 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:41.402386 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:41.402413 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:41.402501 2255048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.no-preload-352076 san=[192.168.72.157 192.168.72.157 localhost 127.0.0.1 minikube no-preload-352076]
	I0911 12:08:41.717751 2255048 provision.go:172] copyRemoteCerts
	I0911 12:08:41.717828 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:41.717865 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.721152 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721457 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.721499 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721720 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.721943 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.722111 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.722261 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:41.818932 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:41.846852 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:41.875977 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 12:08:41.905364 2255048 provision.go:86] duration metric: configureAuth took 510.946609ms
	I0911 12:08:41.905401 2255048 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:41.905662 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:41.905762 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.909182 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909656 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.909725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909913 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.910149 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910342 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910487 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.910649 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.911134 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.911154 2255048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:42.260214 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:42.260254 2255048 machine.go:91] provisioned docker machine in 1.16057097s
	I0911 12:08:42.260268 2255048 start.go:300] post-start starting for "no-preload-352076" (driver="kvm2")
	I0911 12:08:42.260283 2255048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:42.260307 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.260700 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:42.260738 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.263782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264157 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.264197 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264358 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.264595 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.264808 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.265010 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.356470 2255048 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:42.361886 2255048 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:42.361921 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:42.362004 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:42.362082 2255048 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:42.362196 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:42.372005 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:42.400800 2255048 start.go:303] post-start completed in 140.51468ms
	I0911 12:08:42.400850 2255048 fix.go:56] fixHost completed within 24.064734762s
	I0911 12:08:42.400882 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.404351 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.404799 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.404862 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.405055 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.405297 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405484 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405644 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.405859 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:42.406477 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:42.406505 2255048 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:08:42.529978 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434122.467205529
	
	I0911 12:08:42.530008 2255048 fix.go:206] guest clock: 1694434122.467205529
	I0911 12:08:42.530020 2255048 fix.go:219] Guest: 2023-09-11 12:08:42.467205529 +0000 UTC Remote: 2023-09-11 12:08:42.400855668 +0000 UTC m=+369.043734805 (delta=66.349861ms)
	I0911 12:08:42.530049 2255048 fix.go:190] guest clock delta is within tolerance: 66.349861ms
	I0911 12:08:42.530062 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 24.19398788s
	I0911 12:08:42.530094 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.530397 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:42.533425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.533777 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.533809 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.534032 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534670 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534881 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534986 2255048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:42.535048 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.535193 2255048 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:42.535235 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.538009 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538210 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538356 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538386 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538551 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538630 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538658 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538748 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.538862 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538939 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539033 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.539212 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539226 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.539373 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.659315 2255048 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:42.666117 2255048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:42.827592 2255048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:42.834283 2255048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:42.834379 2255048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:42.855077 2255048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:42.855107 2255048 start.go:466] detecting cgroup driver to use...
	I0911 12:08:42.855187 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:42.871553 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:42.886253 2255048 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:42.886341 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:42.902211 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:42.917991 2255048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:43.043679 2255048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:43.182633 2255048 docker.go:212] disabling docker service ...
	I0911 12:08:43.182709 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:43.200269 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:43.216232 2255048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:43.338376 2255048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:43.460730 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:43.478083 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:43.499948 2255048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:43.500018 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.513007 2255048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:43.513098 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.526435 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.539976 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.553967 2255048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:43.568765 2255048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:43.580392 2255048 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:43.580481 2255048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:43.599784 2255048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:08:43.612160 2255048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:43.725608 2255048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:43.930261 2255048 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:43.930390 2255048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:43.937749 2255048 start.go:534] Will wait 60s for crictl version
	I0911 12:08:43.937875 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:43.942818 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:43.986093 2255048 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:43.986210 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.042887 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.106673 2255048 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
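	The crio.go/start.go lines above show the runtime being re-pointed at the pause:3.9 image and the cgroupfs cgroup manager, restarted, and then probed with "Will wait 60s for crictl version" before Kubernetes setup continues. A minimal Go sketch of that probe-and-wait pattern, using only the standard library; the helper name and retry interval are illustrative assumptions, not minikube's actual implementation:

```go
package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForCRISocket polls `crictl version` until the CRI runtime answers or the
// timeout elapses, mirroring the "Will wait 60s for crictl version" step in the
// log above. Illustrative sketch only.
func waitForCRISocket(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("sudo", "/usr/bin/crictl", "version").CombinedOutput()
		if err == nil {
			fmt.Printf("crictl reachable:\n%s", out)
			return nil
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("CRI runtime did not answer within %s", timeout)
}

func main() {
	if err := waitForCRISocket(60 * time.Second); err != nil {
		fmt.Println(err)
	}
}
```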
	I0911 12:08:45.592797 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.592855 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.592874 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.637810 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.637846 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.940997 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.947826 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:45.947867 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.440462 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.449732 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:46.449772 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.940777 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.946988 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:08:46.957787 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:08:46.957832 2255814 api_server.go:131] duration metric: took 6.519019358s to wait for apiserver health ...
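	The api_server.go lines above poll /healthz on the restarted apiserver until the 403s (anonymous access denied) and 500s (rbac/bootstrap-roles and bootstrap priority classes still initialising) give way to a 200. A hedged Go sketch of that polling loop, assuming plain net/http with TLS verification disabled for brevity; real code would trust the cluster CA instead:

```go
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz probes the apiserver /healthz endpoint until it returns 200 OK or
// the timeout expires. The 403/500 bodies seen in the log are what this endpoint
// reports while post-start hooks are still completing. Illustrative only.
func pollHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification for the sketch.
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := pollHealthz("https://192.168.39.230:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```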
	I0911 12:08:46.957845 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:46.957854 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:46.960358 2255814 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:43.138628 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:45.640990 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:46.962120 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:46.987804 2255814 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:47.021845 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:47.042508 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:08:47.042560 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:08:47.042575 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:08:47.042585 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:08:47.042600 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:08:47.042612 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:08:47.042624 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:08:47.042641 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:08:47.042652 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:08:47.042663 2255814 system_pods.go:74] duration metric: took 20.787272ms to wait for pod list to return data ...
	I0911 12:08:47.042677 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:47.048412 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:47.048524 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:47.048547 2255814 node_conditions.go:105] duration metric: took 5.861231ms to run NodePressure ...
	I0911 12:08:47.048595 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:47.550933 2255814 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556511 2255814 kubeadm.go:787] kubelet initialised
	I0911 12:08:47.556543 2255814 kubeadm.go:788] duration metric: took 5.579487ms waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556554 2255814 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:47.563694 2255814 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.569943 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.569975 2255814 pod_ready.go:81] duration metric: took 6.244443ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.569986 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.570001 2255814 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.576703 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576777 2255814 pod_ready.go:81] duration metric: took 6.7656ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.576791 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576805 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.587740 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587788 2255814 pod_ready.go:81] duration metric: took 10.95451ms waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.587813 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587833 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.596430 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596468 2255814 pod_ready.go:81] duration metric: took 8.617854ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.596481 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596492 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.956009 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956047 2255814 pod_ready.go:81] duration metric: took 359.546333ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.956060 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956078 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:44.108577 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:44.112208 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.112736 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:44.112782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.113074 2255048 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:44.119517 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:44.140305 2255048 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:44.140398 2255048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:44.184487 2255048 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:44.184529 2255048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:44.184600 2255048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.184910 2255048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.185117 2255048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.185240 2255048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.185366 2255048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.185790 2255048 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.185987 2255048 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0911 12:08:44.186471 2255048 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.186856 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.186943 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.187105 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.187306 2255048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.187523 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.187570 2255048 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0911 12:08:44.188031 2255048 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.188698 2255048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.350727 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.351429 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.353625 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.356576 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.374129 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0911 12:08:44.385524 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.410764 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.472311 2255048 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0911 12:08:44.472382 2255048 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.472453 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.572121 2255048 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0911 12:08:44.572186 2255048 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.572258 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589426 2255048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0911 12:08:44.589558 2255048 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.589492 2255048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0911 12:08:44.589638 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589643 2255048 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.589692 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691568 2255048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0911 12:08:44.691627 2255048 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.691657 2255048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0911 12:08:44.691734 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.691767 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.691749 2255048 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.691816 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691705 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691943 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.691955 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.729362 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.778025 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0911 12:08:44.778152 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0911 12:08:44.778215 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:44.778280 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.799788 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0911 12:08:44.799952 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:08:44.799997 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.800112 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.800183 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0911 12:08:44.800283 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:44.851138 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0911 12:08:44.851174 2255048 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851192 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0911 12:08:44.851227 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0911 12:08:44.851239 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851141 2255048 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0911 12:08:44.851363 2255048 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.851430 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.896214 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0911 12:08:44.896261 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0911 12:08:44.896310 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0911 12:08:44.896376 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:44.896377 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:08:46.231671 2255048 ssh_runner.go:235] Completed: which crictl: (1.380174214s)
	I0911 12:08:46.231732 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (1.33531707s)
	I0911 12:08:46.231734 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.38044194s)
	I0911 12:08:46.231760 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0911 12:08:46.231767 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0911 12:08:46.231780 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:46.231781 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231821 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231777 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1: (1.335378451s)
	I0911 12:08:46.231904 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0911 12:08:48.356501 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356547 2255814 pod_ready.go:81] duration metric: took 400.453753ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.356563 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356575 2255814 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:48.756718 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756761 2255814 pod_ready.go:81] duration metric: took 400.17438ms waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.756775 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756786 2255814 pod_ready.go:38] duration metric: took 1.200219314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:48.756806 2255814 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:48.775561 2255814 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:48.775587 2255814 kubeadm.go:640] restartCluster took 22.189536767s
	I0911 12:08:48.775598 2255814 kubeadm.go:406] StartCluster complete in 22.23955062s
	I0911 12:08:48.775621 2255814 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.775713 2255814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:48.778091 2255814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.778397 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:48.778424 2255814 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:48.778566 2255814 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778597 2255814 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.778614 2255814 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:48.778648 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:48.778696 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.778718 2255814 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778734 2255814 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-484027"
	I0911 12:08:48.779141 2255814 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.779145 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779159 2255814 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.779167 2255814 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:48.779173 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779207 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.779289 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779343 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779509 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779556 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.786929 2255814 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-484027" context rescaled to 1 replicas
	I0911 12:08:48.786996 2255814 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:48.789204 2255814 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:48.790973 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:48.799774 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0911 12:08:48.800366 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.800462 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0911 12:08:48.801065 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.801286 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.801312 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802064 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.802091 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802105 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0911 12:08:48.802166 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802495 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.802842 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.803804 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.803827 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.804437 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.805108 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.805156 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.823113 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0911 12:08:48.823705 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.824347 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.824378 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.824848 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.825073 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.827337 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.827355 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0911 12:08:48.830403 2255814 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:48.827726 2255814 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-484027"
	I0911 12:08:48.828116 2255814 main.go:141] libmachine: () Calling .GetVersion
	W0911 12:08:48.832240 2255814 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:48.832297 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.832351 2255814 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:48.832372 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:48.832404 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.832767 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.832846 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.833819 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.833843 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.834348 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.834583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.836499 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.837953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838586 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.838616 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838662 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.838863 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.839009 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.839383 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.848085 2255814 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:48.850041 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:48.850077 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:48.850117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.853766 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.854324 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.855024 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.855222 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.855427 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.857253 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0911 12:08:48.858013 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.858572 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.858593 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.858922 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.859424 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.859461 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.877066 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0911 12:08:48.877762 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.878430 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.878451 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.878986 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.879214 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.881452 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.881771 2255814 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:48.881790 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:48.881810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.885826 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.886380 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.886406 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.887000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.887261 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.887456 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.887604 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.990643 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:49.087344 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:49.087379 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:49.088448 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:49.172284 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:49.172325 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:49.284010 2255814 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:49.284396 2255814 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:49.296054 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:49.296086 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:49.379706 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:51.018731 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.028036666s)
	I0911 12:08:51.018796 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.018733 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.930229373s)
	I0911 12:08:51.018900 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018920 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019201 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019252 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019291 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019304 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019315 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019325 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019420 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019433 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019445 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019457 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021142 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021184 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021199 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021204 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021238 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.021259 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021542 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021615 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021683 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.122492 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742646501s)
	I0911 12:08:51.122563 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.122582 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123214 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123224 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.123232 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123668 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123713 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123729 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123743 2255814 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-484027"
	I0911 12:08:51.126333 2255814 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:08:48.273682 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:50.640588 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:51.128042 2255814 addons.go:502] enable addons completed in 2.34962006s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:08:51.299348 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:49.857883 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.62602487s)
	I0911 12:08:49.857920 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0911 12:08:49.857935 2255048 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858008 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858007 2255048 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.626200516s)
	I0911 12:08:49.858128 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0911 12:08:49.858215 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:08:53.140732 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:55.639106 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:53.799851 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:56.661585 2255814 node_ready.go:49] node "default-k8s-diff-port-484027" has status "Ready":"True"
	I0911 12:08:56.661621 2255814 node_ready.go:38] duration metric: took 7.377564832s waiting for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:56.661651 2255814 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:56.675600 2255814 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.686880 2255814 pod_ready.go:92] pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.686977 2255814 pod_ready.go:81] duration metric: took 11.34453ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.687027 2255814 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.695897 2255814 pod_ready.go:92] pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.695991 2255814 pod_ready.go:81] duration metric: took 8.931143ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.696011 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:57.305638 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (7.447392742s)
	I0911 12:08:57.305689 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0911 12:08:57.305809 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.447768556s)
	I0911 12:08:57.305836 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0911 12:08:57.305855 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:57.305932 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:58.142333 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.644281 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:58.721936 2255814 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.721964 2255814 pod_ready.go:81] duration metric: took 2.025944093s waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.721978 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728483 2255814 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.728509 2255814 pod_ready.go:81] duration metric: took 6.525188ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728522 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868777 2255814 pod_ready.go:92] pod "kube-proxy-ldgjr" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.868821 2255814 pod_ready.go:81] duration metric: took 140.280926ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868839 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266668 2255814 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:59.266699 2255814 pod_ready.go:81] duration metric: took 397.852252ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266710 2255814 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:01.578711 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.172738 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.866760661s)
	I0911 12:09:00.172779 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0911 12:09:00.172904 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:00.172989 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:01.745988 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.572965994s)
	I0911 12:09:01.746029 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0911 12:09:01.746047 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:01.746105 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:03.140327 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:05.141268 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:04.080056 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:06.578690 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:03.814358 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.068208039s)
	I0911 12:09:03.814432 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0911 12:09:03.814452 2255048 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:03.814516 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:04.982461 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.167909383s)
	I0911 12:09:04.982505 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0911 12:09:04.982542 2255048 cache_images.go:123] Successfully loaded all cached images
	I0911 12:09:04.982549 2255048 cache_images.go:92] LoadImages completed in 20.798002598s
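
The block above is the image cache-warm path for the crio runtime: minikube stats each pre-copied tarball under /var/lib/minikube/images, skips the transfer when the tarball already exists, removes any stale tag with crictl rmi, and then loads the tarball with `sudo podman load -i ...`. Below is a minimal stand-alone sketch of that check-then-load step (hypothetical helper name and image list, not minikube's actual code; assumes podman is installed and the tarball layout matches the paths in the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage loads a cached image tarball into local container storage
// with `podman load`, skipping the call when the tarball is missing.
// Hypothetical helper mirroring the check-then-load flow in the log above.
func loadCachedImage(dir, name string) error {
	tarball := filepath.Join(dir, name)
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image %s not present: %w", tarball, err)
	}
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	return cmd.Run()
}

func main() {
	for _, img := range []string{"coredns_v1.10.1", "etcd_3.5.9-0", "kube-apiserver_v1.28.1"} {
		if err := loadCachedImage("/var/lib/minikube/images", img); err != nil {
			fmt.Fprintln(os.Stderr, err)
		}
	}
}
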
	I0911 12:09:04.982647 2255048 ssh_runner.go:195] Run: crio config
	I0911 12:09:05.047992 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:05.048024 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:05.048049 2255048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:09:05.048070 2255048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-352076 NodeName:no-preload-352076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:09:05.048268 2255048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-352076"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:09:05.048352 2255048 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-352076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
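
For orientation: the YAML above is the kubeadm config minikube renders for this profile (it is written to /var/tmp/minikube/kubeadm.yaml.new a few lines below), and the [Service] fragment is the kubelet systemd drop-in installed as 10-kubeadm.conf. A hedged sketch that reads the generated file back and prints the fields of interest is shown here (hypothetical check, not part of the test run; uses gopkg.in/yaml.v3, which must be fetched separately):

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// clusterConfig captures just the fields we want to inspect from the
// ClusterConfiguration document shown in the log (a subset, for illustration).
type clusterConfig struct {
	Kind              string `yaml:"kind"`
	KubernetesVersion string `yaml:"kubernetesVersion"`
	Networking        struct {
		PodSubnet     string `yaml:"podSubnet"`
		ServiceSubnet string `yaml:"serviceSubnet"`
	} `yaml:"networking"`
}

func main() {
	raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// kubeadm configs are multi-document YAML; pick out the ClusterConfiguration.
	for _, doc := range strings.Split(string(raw), "\n---\n") {
		var cfg clusterConfig
		if yaml.Unmarshal([]byte(doc), &cfg) == nil && cfg.Kind == "ClusterConfiguration" {
			fmt.Printf("version=%s podSubnet=%s serviceSubnet=%s\n",
				cfg.KubernetesVersion, cfg.Networking.PodSubnet, cfg.Networking.ServiceSubnet)
		}
	}
}
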
	I0911 12:09:05.048427 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:09:05.060720 2255048 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:09:05.060881 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:09:05.072228 2255048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:09:05.093943 2255048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:09:05.113383 2255048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0911 12:09:05.136859 2255048 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0911 12:09:05.143807 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
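
The bash one-liner above keeps the control-plane.minikube.internal entry in /etc/hosts idempotent: filter out any existing line ending in the hostname, append a fresh IP mapping, and copy the temp file back over /etc/hosts. A stand-alone sketch of the same replace-or-append logic follows (hypothetical helper; it operates on a local file rather than /etc/hosts):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHostsEntry rewrites a hosts file so exactly one line maps the given
// hostname, mirroring the grep -v / echo / cp one-liner in the log above.
func upsertHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if !strings.HasSuffix(line, "\t"+host) { // drop the old mapping, keep everything else
			kept = append(kept, line)
		}
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHostsEntry("hosts.local", "192.168.72.157", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
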
	I0911 12:09:05.160629 2255048 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076 for IP: 192.168.72.157
	I0911 12:09:05.160686 2255048 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:09:05.161057 2255048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:09:05.161131 2255048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:09:05.161253 2255048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.key
	I0911 12:09:05.161367 2255048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key.66fc92c5
	I0911 12:09:05.161447 2255048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key
	I0911 12:09:05.161605 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:09:05.161646 2255048 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:09:05.161655 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:09:05.161696 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:09:05.161745 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:09:05.161773 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:09:05.161838 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:09:05.162864 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:09:05.196273 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 12:09:05.226310 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:09:05.259094 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:09:05.296329 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:09:05.329537 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:09:05.363893 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:09:05.398183 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:09:05.431986 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:09:05.462584 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:09:05.494047 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:09:05.531243 2255048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:09:05.554858 2255048 ssh_runner.go:195] Run: openssl version
	I0911 12:09:05.564158 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:09:05.578611 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585480 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585563 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.592835 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:09:05.606413 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:09:05.618978 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626101 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626179 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.634526 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:09:05.648394 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:09:05.664598 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671632 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671734 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.679143 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:09:05.691797 2255048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:09:05.698734 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:09:05.706797 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:09:05.713927 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:09:05.721394 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:09:05.728652 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:09:05.736364 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
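
Each of the `openssl x509 -noout -in ... -checkend 86400` runs above simply asks whether the certificate will still be valid 24 hours from now (exit status 0 means it will not expire within that window). The same check expressed in Go, as a hedged equivalent (hypothetical file path; same 24-hour window):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same question `openssl x509 -checkend` answers in the log above.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}
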
	I0911 12:09:05.744505 2255048 kubeadm.go:404] StartCluster: {Name:no-preload-352076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:09:05.744673 2255048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:09:05.744751 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:05.783568 2255048 cri.go:89] found id: ""
	I0911 12:09:05.783665 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:09:05.794403 2255048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:09:05.794443 2255048 kubeadm.go:636] restartCluster start
	I0911 12:09:05.794532 2255048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:09:05.808458 2255048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.809808 2255048 kubeconfig.go:92] found "no-preload-352076" server: "https://192.168.72.157:8443"
	I0911 12:09:05.812541 2255048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:09:05.824406 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.824488 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.838004 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.838029 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.838081 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.850725 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.351553 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.351683 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.365583 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.851068 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.851203 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.865829 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.351654 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.351840 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.365462 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.851109 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.851227 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.865132 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:08.351854 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.351980 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.364980 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.637342 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.637899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.638591 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.078188 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.575790 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:08.850933 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.851079 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.865313 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.350825 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.350918 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.363633 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.850908 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.851009 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.864051 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.351371 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.351459 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.364187 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.851868 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.851993 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.865706 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.351327 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.351445 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.364860 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.851490 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.851579 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.865090 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.351698 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.351841 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.365554 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.851082 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.851189 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.863359 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.351652 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.351762 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.364220 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.638913 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.138385 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:14.075701 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.083424 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:13.851558 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.851650 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.864548 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.351104 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.351196 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.363567 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.851181 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.851287 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.865371 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.351813 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:15.351921 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:15.364728 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.825491 2255048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:09:15.825532 2255048 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:09:15.825549 2255048 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:09:15.825628 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:15.863098 2255048 cri.go:89] found id: ""
	I0911 12:09:15.863207 2255048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:09:15.881673 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:09:15.892264 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:09:15.892363 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903142 2255048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903168 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:16.075542 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.073042 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.305269 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.399770 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.484630 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:09:17.484713 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:17.502746 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.017919 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.139562 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:20.643130 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.578074 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:21.077490 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.517850 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.018007 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.518125 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.018229 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.062967 2255048 api_server.go:72] duration metric: took 2.578334133s to wait for apiserver process to appear ...
	I0911 12:09:20.062999 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:09:20.063024 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.063765 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.063812 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.064348 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.564847 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.276251 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.276297 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.276314 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.320049 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.320081 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.564444 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.570484 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:24.570524 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.064830 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.071229 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:25.071269 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.564901 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.570887 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:09:25.580713 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:09:25.580746 2255048 api_server.go:131] duration metric: took 5.517738896s to wait for apiserver health ...
	I0911 12:09:25.580759 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:25.580768 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:25.583425 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:09:23.139085 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.140565 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:23.077522 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.576471 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.585300 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:09:25.610742 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:09:25.660757 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:09:25.680043 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:09:25.680087 2255048 system_pods.go:61] "coredns-5dd5756b68-mghg7" [380c0d4e-d7e3-4434-9757-f4debc5206d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:09:25.680104 2255048 system_pods.go:61] "etcd-no-preload-352076" [4f74cb61-25fb-4478-afd4-3b0f0ef1bdae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:09:25.680115 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [09ed0349-f0dc-4ab0-b057-230daeb8e7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:09:25.680127 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [c93ec6ac-408b-4859-b45b-79bb3e3b53d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:09:25.680142 2255048 system_pods.go:61] "kube-proxy-f748l" [8379d15e-e886-48cb-8a53-3a48aef7c9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:09:25.680157 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [7e7068d1-7f6b-4fe7-b1f4-73ddab4c7db4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:09:25.680174 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-tvrkk" [7b463025-d2f8-4f1d-aa69-740cd828c1d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:09:25.680188 2255048 system_pods.go:61] "storage-provisioner" [52928c2e-1383-41b0-817c-203d016da7df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:09:25.680201 2255048 system_pods.go:74] duration metric: took 19.417405ms to wait for pod list to return data ...
	I0911 12:09:25.680220 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:09:25.685088 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:09:25.685127 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:09:25.685144 2255048 node_conditions.go:105] duration metric: took 4.914847ms to run NodePressure ...
	I0911 12:09:25.685170 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:26.127026 2255048 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137211 2255048 kubeadm.go:787] kubelet initialised
	I0911 12:09:26.137247 2255048 kubeadm.go:788] duration metric: took 10.126758ms waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137258 2255048 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:09:26.144732 2255048 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:28.168555 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:27.637951 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.142107 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.144784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:28.078707 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.575535 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.575917 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.169198 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:31.168599 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:31.168623 2255048 pod_ready.go:81] duration metric: took 5.02386193s waiting for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:31.168633 2255048 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194954 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:32.194986 2255048 pod_ready.go:81] duration metric: took 1.026346965s waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194997 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218527 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:33.218555 2255048 pod_ready.go:81] duration metric: took 1.02355184s waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218568 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:34.637330 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:36.638472 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:34.577030 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:37.076594 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:35.576857 2255048 pod_ready.go:102] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:38.072765 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.072791 2255048 pod_ready.go:81] duration metric: took 4.854217828s waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.072807 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080177 2255048 pod_ready.go:92] pod "kube-proxy-f748l" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.080219 2255048 pod_ready.go:81] duration metric: took 7.386736ms waiting for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080234 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086910 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.086935 2255048 pod_ready.go:81] duration metric: took 6.692353ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086947 2255048 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:39.139899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.638556 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:39.076977 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.077356 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:40.275588 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:42.279343 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.140467 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.638950 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:43.575930 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.075946 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.773655 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.773783 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.639947 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.136953 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.076228 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:50.076280 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:52.575191 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.781871 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.276719 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.137841 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.639201 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:54.575724 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:56.577539 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.774303 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.775398 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:57.776172 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:58.137820 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.140032 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:59.075343 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:01.077352 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.274288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.281024 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.637659 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.638359 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:07.138194 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:03.576039 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:05.581746 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.774609 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:06.777649 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.638158 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:12.138452 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:08.086089 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:10.577034 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.274229 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:11.773772 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:14.637905 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.137141 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.075497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:15.075928 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.077025 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.777087 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:16.273244 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:18.274393 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.138225 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.638206 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.574944 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.577126 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:20.274987 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:22.774026 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:23.638427 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:24.077660 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:26.576065 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.274996 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:27.773877 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.143807 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:30.639138 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.576550 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:31.076503 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:29.775191 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:32.275040 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.137429 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.137961 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:37.141067 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.575704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.576704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:34.773882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:36.774534 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:39.637647 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.639902 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.076297 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:40.577008 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.774671 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.274312 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.274935 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:44.137187 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:46.141314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.079758 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.589530 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.774930 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.273321 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.638868 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:51.139417 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.076212 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.078989 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.575259 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.274454 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.275086 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:53.637980 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:55.638403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.575452 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:56.575714 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.777442 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:57.273658 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:58.136668 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:00.137799 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.077541 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.576462 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.275476 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.773680 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:02.636537 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.637865 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:07.136712 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.078863 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.577886 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:03.776995 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.274574 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:08.275266 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.137886 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.147508 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.075793 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.575828 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:10.275357 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:12.775241 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:13.638603 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.137986 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:14.076435 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.078427 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:15.275325 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:17.275446 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.138511 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.638477 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.575789 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.575987 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.576545 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:19.774865 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.280364 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:23.138801 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:25.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.577693 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:26.581497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.774606 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.274878 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.639126 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.640834 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:32.138497 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.079788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.575364 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.774769 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.777925 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.636906 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.640855 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:33.576041 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:35.577513 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.275601 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.282120 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:39.138445 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:41.638724 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.074500 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.077237 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:42.078135 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.774882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.776485 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.277653 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.639224 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.137265 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:44.574433 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.576378 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:45.776572 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.275210 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.137470 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.580531 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:51.076018 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.775117 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.775535 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.641468 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.138561 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.138875 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:53.078788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.079529 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.577003 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.274582 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.774611 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:59.637786 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:01.644407 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.075246 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.078022 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.274022 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.275711 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.137692 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.614957 2255187 pod_ready.go:81] duration metric: took 4m0.000726123s waiting for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:04.614999 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:04.615020 2255187 pod_ready.go:38] duration metric: took 4m6.604014313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:04.615056 2255187 kubeadm.go:640] restartCluster took 4m25.597873734s
	W0911 12:12:04.615156 2255187 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:12:04.615268 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:12:04.576764 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:06.579533 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.779450 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:07.276202 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:08.580439 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.075465 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:09.277634 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.776920 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:13.076473 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:15.077335 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:17.574470 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:14.276806 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:16.774759 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:19.576080 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:22.078686 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:18.775173 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:21.274723 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:23.276576 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:24.082590 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:26.584485 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:25.277284 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:27.774953 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:29.079400 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:31.575879 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:30.278194 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:32.773872 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.434471 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.819147659s)
	I0911 12:12:37.434634 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:12:37.450370 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:12:37.463019 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:12:37.473313 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:12:37.473375 2255187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:12:33.578208 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.076227 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:34.775135 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.775239 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.703004 2255187 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:12:38.574884 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:40.577027 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:38.779298 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:41.274039 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.076990 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.077566 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:47.576057 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.775208 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.775382 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:48.274401 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:49.022486 2255187 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:12:49.022566 2255187 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:12:49.022667 2255187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:12:49.022825 2255187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:12:49.022994 2255187 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:12:49.023081 2255187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:12:49.025047 2255187 out.go:204]   - Generating certificates and keys ...
	I0911 12:12:49.025151 2255187 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:12:49.025249 2255187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:12:49.025340 2255187 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:12:49.025428 2255187 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:12:49.025521 2255187 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:12:49.025599 2255187 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:12:49.025703 2255187 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:12:49.025801 2255187 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:12:49.025898 2255187 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:12:49.026021 2255187 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:12:49.026083 2255187 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:12:49.026163 2255187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:12:49.026252 2255187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:12:49.026338 2255187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:12:49.026436 2255187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:12:49.026518 2255187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:12:49.026609 2255187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:12:49.026694 2255187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:12:49.028378 2255187 out.go:204]   - Booting up control plane ...
	I0911 12:12:49.028469 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:12:49.028538 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:12:49.028632 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:12:49.028759 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:12:49.028894 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:12:49.028960 2255187 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:12:49.029126 2255187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:12:49.029225 2255187 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504895 seconds
	I0911 12:12:49.029346 2255187 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:12:49.029485 2255187 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:12:49.029568 2255187 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:12:49.029801 2255187 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-235462 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:12:49.029864 2255187 kubeadm.go:322] [bootstrap-token] Using token: u1pjdn.ynd5x30gs2d5ngse
	I0911 12:12:49.031514 2255187 out.go:204]   - Configuring RBAC rules ...
	I0911 12:12:49.031635 2255187 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:12:49.031766 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:12:49.031961 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:12:49.032100 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:12:49.032234 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:12:49.032370 2255187 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:12:49.032513 2255187 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:12:49.032569 2255187 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:12:49.032641 2255187 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:12:49.032653 2255187 kubeadm.go:322] 
	I0911 12:12:49.032721 2255187 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:12:49.032733 2255187 kubeadm.go:322] 
	I0911 12:12:49.032850 2255187 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:12:49.032862 2255187 kubeadm.go:322] 
	I0911 12:12:49.032897 2255187 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:12:49.032954 2255187 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:12:49.033027 2255187 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:12:49.033034 2255187 kubeadm.go:322] 
	I0911 12:12:49.033113 2255187 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:12:49.033125 2255187 kubeadm.go:322] 
	I0911 12:12:49.033185 2255187 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:12:49.033194 2255187 kubeadm.go:322] 
	I0911 12:12:49.033272 2255187 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:12:49.033364 2255187 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:12:49.033478 2255187 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:12:49.033488 2255187 kubeadm.go:322] 
	I0911 12:12:49.033592 2255187 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:12:49.033674 2255187 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:12:49.033681 2255187 kubeadm.go:322] 
	I0911 12:12:49.033793 2255187 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.033940 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:12:49.033981 2255187 kubeadm.go:322] 	--control-plane 
	I0911 12:12:49.033994 2255187 kubeadm.go:322] 
	I0911 12:12:49.034117 2255187 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:12:49.034140 2255187 kubeadm.go:322] 
	I0911 12:12:49.034253 2255187 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.034398 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:12:49.034424 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:12:49.034438 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:12:49.036358 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:12:49.037952 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:12:49.078613 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
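	The 457-byte conflist copied above is the bridge CNI configuration minikube generates for the kvm2/crio combination chosen a few lines earlier. Its contents are not echoed into this log, but they can be read back out of the guest over ssh (a sketch; the IP, username, and key path are the ones recorded in the sshutil lines later in this log):

	  # Dump the CNI config that the scp step above just installed.
	  ssh -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa \
	      docker@192.168.50.96 "sudo cat /etc/cni/net.d/1-k8s.conflist"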
	I0911 12:12:49.171320 2255187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:12:49.171458 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.171492 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=embed-certs-235462 minikube.k8s.io/updated_at=2023_09_11T12_12_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.227806 2255187 ops.go:34] apiserver oom_adj: -16
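	The two kubectl runs above label the new control-plane node and create the minikube-rbac ClusterRoleBinding that grants cluster-admin to the kube-system default service account. Whether the binding took effect can be checked with standard kubectl subcommands (a sketch; assumes kubectl is pointed at this cluster's admin kubeconfig):

	  # Show the binding created by the "create clusterrolebinding minikube-rbac" step above.
	  kubectl get clusterrolebinding minikube-rbac -o wide
	  # Confirm kube-system:default is now allowed to do anything cluster-wide.
	  kubectl auth can-i '*' '*' --as=system:serviceaccount:kube-system:default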
	I0911 12:12:49.533909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.637357 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.234909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.734249 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.234928 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.734543 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:52.235022 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.576947 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.075970 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:50.275288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.775973 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.734323 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.234558 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.734598 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.235197 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.734524 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.234539 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.734806 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.234833 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.734868 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:57.235336 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.574674 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:56.577723 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:54.777705 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.274282 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.735164 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.234340 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.734332 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.234884 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.734265 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.234310 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.376532 2255187 kubeadm.go:1081] duration metric: took 11.205145428s to wait for elevateKubeSystemPrivileges.
	I0911 12:13:00.376577 2255187 kubeadm.go:406] StartCluster complete in 5m21.403889838s
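	The burst of "kubectl get sa default" runs above is minikube polling until the cluster's default service account exists, i.e. the elevateKubeSystemPrivileges wait that completed in ~11.2s. A standalone equivalent of that loop, using the binary and kubeconfig paths shown in the log, would be (illustrative sketch only):

	  # Poll until the "default" ServiceAccount is created, mirroring the wait above.
	  until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	    sleep 0.5
	  done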
	I0911 12:13:00.376632 2255187 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.376754 2255187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:13:00.379195 2255187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.379496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:13:00.379604 2255187 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:13:00.379714 2255187 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-235462"
	I0911 12:13:00.379735 2255187 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-235462"
	W0911 12:13:00.379744 2255187 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:13:00.379770 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:13:00.379813 2255187 addons.go:69] Setting default-storageclass=true in profile "embed-certs-235462"
	I0911 12:13:00.379829 2255187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235462"
	I0911 12:13:00.379872 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380021 2255187 addons.go:69] Setting metrics-server=true in profile "embed-certs-235462"
	I0911 12:13:00.380038 2255187 addons.go:231] Setting addon metrics-server=true in "embed-certs-235462"
	W0911 12:13:00.380053 2255187 addons.go:240] addon metrics-server should already be in state true
	I0911 12:13:00.380092 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380276 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380299 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380314 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380338 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380443 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380464 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.400206 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0911 12:13:00.400222 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0911 12:13:00.400384 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0911 12:13:00.400955 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400990 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400957 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.401597 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401619 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.401749 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401769 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402081 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402237 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.402249 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402314 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402602 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402785 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.402950 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402972 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402986 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.403016 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.424319 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0911 12:13:00.424352 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0911 12:13:00.424991 2255187 addons.go:231] Setting addon default-storageclass=true in "embed-certs-235462"
	W0911 12:13:00.425015 2255187 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:13:00.425039 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425053 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.425387 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425471 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.425496 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.425891 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.425904 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426206 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.426222 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426644 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.426842 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.428151 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.429014 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.431494 2255187 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:13:00.429852 2255187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-235462" context rescaled to 1 replicas
	I0911 12:13:00.430039 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.433081 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:13:00.433096 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:13:00.433121 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.433184 2255187 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:13:00.438048 2255187 out.go:177] * Verifying Kubernetes components...
	I0911 12:13:00.436324 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.437532 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.438207 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.442076 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:00.442211 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.442240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.443931 2255187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:13:00.442451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.445563 2255187 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.445579 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.445583 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:13:00.445606 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.445674 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.449267 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0911 12:13:00.449534 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.449823 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.450240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.450270 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.450451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.450818 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.450838 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.450906 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.451120 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.451298 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.452043 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.452652 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.452686 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.470652 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0911 12:13:00.471240 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.471865 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.471888 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.472326 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.472745 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.474485 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.475072 2255187 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.475093 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:13:00.475123 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.478333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478757 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.478788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478949 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.479157 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.479301 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.479434 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.601913 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:13:00.601946 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:13:00.629483 2255187 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.629938 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:13:00.651067 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.653504 2255187 node_ready.go:49] node "embed-certs-235462" has status "Ready":"True"
	I0911 12:13:00.653549 2255187 node_ready.go:38] duration metric: took 24.023395ms waiting for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.653564 2255187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:00.663033 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:13:00.663075 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:13:00.668515 2255187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.709787 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.751534 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.751565 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:13:00.782859 2255187 pod_ready.go:92] pod "etcd-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.782894 2255187 pod_ready.go:81] duration metric: took 114.332855ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.782910 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.823512 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.891619 2255187 pod_ready.go:92] pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.891678 2255187 pod_ready.go:81] duration metric: took 108.758908ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.891695 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001447 2255187 pod_ready.go:92] pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.001483 2255187 pod_ready.go:81] duration metric: took 109.778603ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001501 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164166 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.164205 2255187 pod_ready.go:81] duration metric: took 162.694687ms waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164216 2255187 pod_ready.go:38] duration metric: took 510.637428ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:01.164239 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:13:01.164300 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:12:59.081781 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:59.267524 2255814 pod_ready.go:81] duration metric: took 4m0.000791617s waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:59.267566 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:59.267580 2255814 pod_ready.go:38] duration metric: took 4m2.605912471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:59.267603 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:12:59.267645 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:12:59.267855 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:12:59.332014 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:12:59.332042 2255814 cri.go:89] found id: ""
	I0911 12:12:59.332053 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:12:59.332135 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.338400 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:12:59.338493 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:12:59.373232 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:12:59.373284 2255814 cri.go:89] found id: ""
	I0911 12:12:59.373296 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:12:59.373371 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.379199 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:12:59.379288 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:12:59.415804 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:12:59.415840 2255814 cri.go:89] found id: ""
	I0911 12:12:59.415852 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:12:59.415940 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.422256 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:12:59.422343 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:12:59.462300 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:12:59.462327 2255814 cri.go:89] found id: ""
	I0911 12:12:59.462336 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:12:59.462392 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.467244 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:12:59.467364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:12:59.499594 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.499619 2255814 cri.go:89] found id: ""
	I0911 12:12:59.499627 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:12:59.499697 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.504481 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:12:59.504570 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:12:59.536588 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.536620 2255814 cri.go:89] found id: ""
	I0911 12:12:59.536631 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:12:59.536701 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.541454 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:12:59.541529 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:12:59.577953 2255814 cri.go:89] found id: ""
	I0911 12:12:59.577990 2255814 logs.go:284] 0 containers: []
	W0911 12:12:59.578001 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:12:59.578010 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:12:59.578084 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:12:59.616256 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.616283 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.616288 2255814 cri.go:89] found id: ""
	I0911 12:12:59.616296 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:12:59.616350 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.621818 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.627431 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:12:59.627462 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:12:59.690633 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:12:59.690681 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.733084 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:12:59.733133 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.775174 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:12:59.775220 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:12:59.829438 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:12:59.829492 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.894842 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:12:59.894888 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.936662 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:12:59.936703 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:12:59.955507 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:12:59.955544 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:00.127082 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:00.127129 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:00.178458 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:00.178501 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:00.226759 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:00.226805 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:00.267586 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:00.267637 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:00.311431 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:00.311465 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:12:59.276905 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:01.775061 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:02.733813 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.103819607s)
	I0911 12:13:02.733859 2255187 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
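	The pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-side gateway address (192.168.50.1 here). The injected stanza can be verified directly (a sketch; assumes kubectl targets this cluster):

	  # Print the live Corefile; the sed above should have inserted a block like:
	  #   hosts {
	  #      192.168.50.1 host.minikube.internal
	  #      fallthrough
	  #   }
	  kubectl -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'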
	I0911 12:13:03.298110 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.646997747s)
	I0911 12:13:03.298169 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298179 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298209 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.588380755s)
	I0911 12:13:03.298256 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298278 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298545 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298566 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298577 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298586 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298596 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298611 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298622 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298834 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.298891 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298904 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299077 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299104 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299117 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.299127 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.299083 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.299459 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299474 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.485702 2255187 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.321356388s)
	I0911 12:13:03.485741 2255187 api_server.go:72] duration metric: took 3.052522714s to wait for apiserver process to appear ...
	I0911 12:13:03.485748 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.485768 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:13:03.485987 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.66240811s)
	I0911 12:13:03.486070 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486090 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486553 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.486621 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486642 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486666 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486683 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486940 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486956 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486968 2255187 addons.go:467] Verifying addon metrics-server=true in "embed-certs-235462"
	I0911 12:13:03.489450 2255187 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:13:03.491514 2255187 addons.go:502] enable addons completed in 3.11190942s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:13:03.571696 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:13:03.576690 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:03.576730 2255187 api_server.go:131] duration metric: took 90.974437ms to wait for apiserver health ...
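	The healthz probe logged above can be reproduced by hand against the same endpoint; the apiserver presents a self-signed certificate, so verification is skipped with -k (a sketch; IP and port taken from this log):

	  # Same check api_server.go performs above; a healthy apiserver returns the body "ok".
	  curl -k https://192.168.50.96:8443/healthz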
	I0911 12:13:03.576743 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:03.592687 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:03.592734 2255187 system_pods.go:61] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.592745 2255187 system_pods.go:61] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.592753 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.592761 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.592769 2255187 system_pods.go:61] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.592778 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.592787 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.592802 2255187 system_pods.go:61] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.592839 2255187 system_pods.go:74] duration metric: took 16.087864ms to wait for pod list to return data ...
	I0911 12:13:03.592855 2255187 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:03.606427 2255187 default_sa.go:45] found service account: "default"
	I0911 12:13:03.606517 2255187 default_sa.go:55] duration metric: took 13.6536ms for default service account to be created ...
	I0911 12:13:03.606542 2255187 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:03.622692 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.622752 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.622765 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.622777 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.622786 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.622801 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.622814 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.622980 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.623076 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.623157 2255187 retry.go:31] will retry after 240.25273ms: missing components: kube-dns, kube-proxy
	I0911 12:13:03.874980 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.875031 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.875041 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.875048 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.875081 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.875094 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.875104 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.875118 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.875130 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.875163 2255187 retry.go:31] will retry after 285.300702ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.171503 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.171548 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.171558 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.171566 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.171574 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.171580 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.171587 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.171598 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.171607 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.171632 2255187 retry.go:31] will retry after 386.395514ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.565931 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.565972 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.565982 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.565991 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.565998 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.566007 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.566015 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.566025 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.566039 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.566062 2255187 retry.go:31] will retry after 526.673ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.104101 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.104230 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:05.104257 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.104277 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.104294 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.104312 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:05.104336 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.104353 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.104363 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.104385 2255187 retry.go:31] will retry after 628.795734ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.745358 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.745392 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Running
	I0911 12:13:05.745400 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.745408 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.745416 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.745421 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Running
	I0911 12:13:05.745427 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.745440 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.745451 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.745463 2255187 system_pods.go:126] duration metric: took 2.138903103s to wait for k8s-apps to be running ...
	I0911 12:13:05.745480 2255187 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:05.745540 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:05.762725 2255187 system_svc.go:56] duration metric: took 17.229678ms WaitForService to wait for kubelet.
	I0911 12:13:05.762766 2255187 kubeadm.go:581] duration metric: took 5.329544538s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:05.762793 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:05.767056 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:05.767087 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:05.767112 2255187 node_conditions.go:105] duration metric: took 4.314286ms to run NodePressure ...
	I0911 12:13:05.767131 2255187 start.go:228] waiting for startup goroutines ...
	I0911 12:13:05.767138 2255187 start.go:233] waiting for cluster config update ...
	I0911 12:13:05.767147 2255187 start.go:242] writing updated cluster config ...
	I0911 12:13:05.767462 2255187 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:05.823796 2255187 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:05.826336 2255187 out.go:177] * Done! kubectl is now configured to use "embed-certs-235462" cluster and "default" namespace by default
	I0911 12:13:03.450576 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:13:03.472433 2255814 api_server.go:72] duration metric: took 4m14.685379298s to wait for apiserver process to appear ...
	I0911 12:13:03.472469 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.472520 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:03.472614 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:03.515433 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:03.515471 2255814 cri.go:89] found id: ""
	I0911 12:13:03.515483 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:03.515560 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.521654 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:03.521745 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:03.569379 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:03.569406 2255814 cri.go:89] found id: ""
	I0911 12:13:03.569416 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:03.569481 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.574638 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:03.574723 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:03.610693 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.610722 2255814 cri.go:89] found id: ""
	I0911 12:13:03.610733 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:03.610794 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.615774 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:03.615894 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:03.657087 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:03.657117 2255814 cri.go:89] found id: ""
	I0911 12:13:03.657129 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:03.657211 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.662224 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:03.662315 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:03.698282 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.698359 2255814 cri.go:89] found id: ""
	I0911 12:13:03.698381 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:03.698466 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.704160 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:03.704246 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:03.748122 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.748152 2255814 cri.go:89] found id: ""
	I0911 12:13:03.748162 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:03.748238 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.752657 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:03.752742 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:03.786815 2255814 cri.go:89] found id: ""
	I0911 12:13:03.786853 2255814 logs.go:284] 0 containers: []
	W0911 12:13:03.786863 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:03.786871 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:03.786942 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:03.824384 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.824409 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:03.824414 2255814 cri.go:89] found id: ""
	I0911 12:13:03.824421 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:03.824497 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.830317 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.836320 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:03.836355 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.887480 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:03.887524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.930466 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:03.930507 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.966522 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:03.966563 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:04.026111 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:04.026168 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:04.045422 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:04.045468 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:04.185127 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:04.185179 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:04.235047 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:04.235089 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:04.856084 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:04.856134 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:04.903388 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:04.903433 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:04.964861 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:04.964916 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:05.007565 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:05.007605 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:05.069630 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:05.069676 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.608676 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:13:07.615388 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:13:07.617076 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:07.617101 2255814 api_server.go:131] duration metric: took 4.14462443s to wait for apiserver health ...
	I0911 12:13:07.617110 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:07.617138 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:07.617196 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:07.656726 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:07.656750 2255814 cri.go:89] found id: ""
	I0911 12:13:07.656760 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:07.656850 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.661277 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:07.661364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:07.697717 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:07.697746 2255814 cri.go:89] found id: ""
	I0911 12:13:07.697754 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:07.697842 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.703800 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:07.703888 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:07.747003 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:07.747033 2255814 cri.go:89] found id: ""
	I0911 12:13:07.747043 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:07.747122 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.751932 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:07.752007 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:07.785348 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:07.785375 2255814 cri.go:89] found id: ""
	I0911 12:13:07.785386 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:07.785460 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.790170 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:07.790237 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:07.827467 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:07.827496 2255814 cri.go:89] found id: ""
	I0911 12:13:07.827510 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:07.827583 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.834478 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:07.834552 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:07.873739 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:07.873766 2255814 cri.go:89] found id: ""
	I0911 12:13:07.873774 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:07.873828 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.878424 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:07.878528 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:07.916665 2255814 cri.go:89] found id: ""
	I0911 12:13:07.916696 2255814 logs.go:284] 0 containers: []
	W0911 12:13:07.916708 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:07.916716 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:07.916780 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:07.950146 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:07.950172 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.950177 2255814 cri.go:89] found id: ""
	I0911 12:13:07.950185 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:07.950256 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.954996 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.959157 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:07.959189 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:08.027081 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:08.027112 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.775843 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:06.274500 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:08.079481 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:08.079522 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:08.118655 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:08.118696 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:08.177644 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:08.177690 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:08.192495 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:08.192524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:08.338344 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:08.338388 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:08.385409 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:08.385454 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:08.420999 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:08.421033 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:08.457183 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:08.457223 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:08.500499 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:08.500531 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:08.550546 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:08.550587 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:08.584802 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:08.584854 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:11.626627 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:11.626661 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.626666 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.626670 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.626675 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.626679 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.626683 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.626690 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.626696 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.626702 2255814 system_pods.go:74] duration metric: took 4.009586477s to wait for pod list to return data ...
	I0911 12:13:11.626710 2255814 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:11.630703 2255814 default_sa.go:45] found service account: "default"
	I0911 12:13:11.630735 2255814 default_sa.go:55] duration metric: took 4.019315ms for default service account to be created ...
	I0911 12:13:11.630747 2255814 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:11.637643 2255814 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:11.637681 2255814 system_pods.go:89] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.637687 2255814 system_pods.go:89] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.637693 2255814 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.637697 2255814 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.637701 2255814 system_pods.go:89] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.637706 2255814 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.637713 2255814 system_pods.go:89] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.637720 2255814 system_pods.go:89] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.637727 2255814 system_pods.go:126] duration metric: took 6.974046ms to wait for k8s-apps to be running ...
	I0911 12:13:11.637734 2255814 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:11.637781 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:11.656267 2255814 system_svc.go:56] duration metric: took 18.513073ms WaitForService to wait for kubelet.
	I0911 12:13:11.656313 2255814 kubeadm.go:581] duration metric: took 4m22.869270451s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:11.656342 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:11.660206 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:11.660242 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:11.660256 2255814 node_conditions.go:105] duration metric: took 3.907675ms to run NodePressure ...
	I0911 12:13:11.660271 2255814 start.go:228] waiting for startup goroutines ...
	I0911 12:13:11.660281 2255814 start.go:233] waiting for cluster config update ...
	I0911 12:13:11.660295 2255814 start.go:242] writing updated cluster config ...
	I0911 12:13:11.660673 2255814 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:11.716963 2255814 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:11.719502 2255814 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-484027" cluster and "default" namespace by default
	I0911 12:13:08.774412 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:10.776103 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:13.273773 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:15.274785 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:17.776143 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:20.274491 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:22.276115 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:24.776008 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:26.776415 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:29.274644 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:31.774477 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:33.774923 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:35.776441 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:37.777677 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:38.087732 2255048 pod_ready.go:81] duration metric: took 4m0.000743055s waiting for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	E0911 12:13:38.087774 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:13:38.087805 2255048 pod_ready.go:38] duration metric: took 4m11.950533095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:38.087877 2255048 kubeadm.go:640] restartCluster took 4m32.29342443s
	W0911 12:13:38.087958 2255048 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:13:38.088001 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:14:10.169576 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.081486969s)
	I0911 12:14:10.169706 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:10.189300 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:14:10.202385 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:14:10.213749 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:14:10.213816 2255048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:14:10.279484 2255048 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:14:10.279634 2255048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:14:10.462302 2255048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:14:10.462488 2255048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:14:10.462634 2255048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:14:10.659475 2255048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:14:10.661923 2255048 out.go:204]   - Generating certificates and keys ...
	I0911 12:14:10.662086 2255048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:14:10.662142 2255048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:14:10.662223 2255048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:14:10.662303 2255048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:14:10.663973 2255048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:14:10.665836 2255048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:14:10.667292 2255048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:14:10.668584 2255048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:14:10.669931 2255048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:14:10.670570 2255048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:14:10.671008 2255048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:14:10.671087 2255048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:14:10.865541 2255048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:14:11.063586 2255048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:14:11.341833 2255048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:14:11.573561 2255048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:14:11.574128 2255048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:14:11.577101 2255048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:14:11.579311 2255048 out.go:204]   - Booting up control plane ...
	I0911 12:14:11.579427 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:14:11.579550 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:14:11.579644 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:14:11.598440 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:14:11.599446 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:14:11.599531 2255048 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:14:11.738771 2255048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:14:21.243059 2255048 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503809 seconds
	I0911 12:14:21.243215 2255048 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:14:21.262148 2255048 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:14:21.802567 2255048 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:14:21.802822 2255048 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-352076 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:14:22.320035 2255048 kubeadm.go:322] [bootstrap-token] Using token: 3xtym4.6ytyj76o1n15fsq8
	I0911 12:14:22.321759 2255048 out.go:204]   - Configuring RBAC rules ...
	I0911 12:14:22.321922 2255048 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:14:22.329851 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:14:22.344882 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:14:22.349640 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:14:22.354357 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:14:22.359463 2255048 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:14:22.380068 2255048 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:14:22.713378 2255048 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:14:22.780207 2255048 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:14:22.780252 2255048 kubeadm.go:322] 
	I0911 12:14:22.780331 2255048 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:14:22.780344 2255048 kubeadm.go:322] 
	I0911 12:14:22.780441 2255048 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:14:22.780450 2255048 kubeadm.go:322] 
	I0911 12:14:22.780489 2255048 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:14:22.780568 2255048 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:14:22.780648 2255048 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:14:22.780657 2255048 kubeadm.go:322] 
	I0911 12:14:22.780757 2255048 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:14:22.780791 2255048 kubeadm.go:322] 
	I0911 12:14:22.780876 2255048 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:14:22.780895 2255048 kubeadm.go:322] 
	I0911 12:14:22.780958 2255048 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:14:22.781054 2255048 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:14:22.781157 2255048 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:14:22.781168 2255048 kubeadm.go:322] 
	I0911 12:14:22.781264 2255048 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:14:22.781363 2255048 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:14:22.781374 2255048 kubeadm.go:322] 
	I0911 12:14:22.781490 2255048 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.781618 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:14:22.781684 2255048 kubeadm.go:322] 	--control-plane 
	I0911 12:14:22.781695 2255048 kubeadm.go:322] 
	I0911 12:14:22.781813 2255048 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:14:22.781830 2255048 kubeadm.go:322] 
	I0911 12:14:22.781956 2255048 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.782107 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:14:22.783393 2255048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:14:22.783423 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:14:22.783434 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:14:22.785623 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:14:22.787278 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:14:22.817914 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:14:22.857165 2255048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:14:22.857266 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:22.857282 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=no-preload-352076 minikube.k8s.io/updated_at=2023_09_11T12_14_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.375677 2255048 ops.go:34] apiserver oom_adj: -16
	I0911 12:14:23.375731 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.497980 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.128149 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.627110 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.127658 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.627595 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.127143 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.627803 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.128061 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.627169 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.128081 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.628055 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.127187 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.627707 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.127233 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.627943 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.127222 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.627921 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.127760 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.628112 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.128107 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.627835 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.127171 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.627113 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.127499 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.627255 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.127199 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.314187 2255048 kubeadm.go:1081] duration metric: took 13.456994708s to wait for elevateKubeSystemPrivileges.
	I0911 12:14:36.314241 2255048 kubeadm.go:406] StartCluster complete in 5m30.569752421s
	I0911 12:14:36.314272 2255048 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.314446 2255048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:14:36.317402 2255048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.317739 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:14:36.318031 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:14:36.317936 2255048 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:14:36.318110 2255048 addons.go:69] Setting storage-provisioner=true in profile "no-preload-352076"
	I0911 12:14:36.318135 2255048 addons.go:231] Setting addon storage-provisioner=true in "no-preload-352076"
	I0911 12:14:36.318137 2255048 addons.go:69] Setting default-storageclass=true in profile "no-preload-352076"
	I0911 12:14:36.318148 2255048 addons.go:69] Setting metrics-server=true in profile "no-preload-352076"
	I0911 12:14:36.318163 2255048 addons.go:231] Setting addon metrics-server=true in "no-preload-352076"
	I0911 12:14:36.318164 2255048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-352076"
	W0911 12:14:36.318169 2255048 addons.go:240] addon metrics-server should already be in state true
	I0911 12:14:36.318218 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	W0911 12:14:36.318143 2255048 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:14:36.318318 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.318696 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318710 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318720 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318723 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318738 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318741 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.337905 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0911 12:14:36.338002 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0911 12:14:36.338589 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.338678 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.339313 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339317 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339340 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339363 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339435 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0911 12:14:36.339903 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339909 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339981 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.340160 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.340463 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.340496 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.340588 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.340617 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.341051 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.341512 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.341540 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.359712 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0911 12:14:36.360342 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.360914 2255048 addons.go:231] Setting addon default-storageclass=true in "no-preload-352076"
	W0911 12:14:36.360941 2255048 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:14:36.360969 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.360969 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.360984 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.361238 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.361271 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.361350 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.361540 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.362624 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:14:36.363381 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.363731 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.364093 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.364114 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.366385 2255048 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:14:36.364716 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.368526 2255048 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.368557 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:14:36.368640 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.368799 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.371211 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.374123 2255048 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:14:36.373727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.374507 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.376914 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.376951 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.376846 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:14:36.376970 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:14:36.376991 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.377194 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.377424 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.377656 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.380757 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381482 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.381508 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381537 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.381783 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.381953 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.382098 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.383003 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0911 12:14:36.383415 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.383860 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.383884 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.384174 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.384600 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.384650 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.401421 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0911 12:14:36.401987 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.402660 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.402684 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.403172 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.403456 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.406003 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.406531 2255048 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.406567 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:14:36.406593 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.410520 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411016 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.411072 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411331 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.411517 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.411723 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.411895 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.448234 2255048 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-352076" context rescaled to 1 replicas
	I0911 12:14:36.448281 2255048 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:14:36.450615 2255048 out.go:177] * Verifying Kubernetes components...
	I0911 12:14:36.452566 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:36.600188 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:14:36.600187 2255048 node_ready.go:35] waiting up to 6m0s for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611125 2255048 node_ready.go:49] node "no-preload-352076" has status "Ready":"True"
	I0911 12:14:36.611167 2255048 node_ready.go:38] duration metric: took 10.942009ms waiting for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611181 2255048 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:36.632729 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:14:36.632759 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:14:36.640639 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:36.656421 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.659146 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.711603 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:14:36.711644 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:14:36.780574 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:36.780614 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:14:36.874964 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.969647165s)
	I0911 12:14:38.569949 2255048 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.91343277s)
	I0911 12:14:38.570001 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570017 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570428 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570469 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570484 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570440 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570495 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570786 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570801 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570803 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570820 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570830 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.571133 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.571183 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.571196 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.756212 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:39.258501 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599303563s)
	I0911 12:14:39.258567 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258581 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.258631 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.383622497s)
	I0911 12:14:39.258693 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258713 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259000 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259069 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259129 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259139 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259040 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259150 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259154 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259165 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259178 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259468 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259514 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259605 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259620 2255048 addons.go:467] Verifying addon metrics-server=true in "no-preload-352076"
	I0911 12:14:39.261573 2255048 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:14:39.263513 2255048 addons.go:502] enable addons completed in 2.945573816s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:14:41.194698 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:41.682872 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.682904 2255048 pod_ready.go:81] duration metric: took 5.042231142s waiting for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.682919 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.685265 2255048 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685295 2255048 pod_ready.go:81] duration metric: took 2.370305ms waiting for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	E0911 12:14:41.685306 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685313 2255048 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694255 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.694295 2255048 pod_ready.go:81] duration metric: took 8.974837ms waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694309 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700807 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.700854 2255048 pod_ready.go:81] duration metric: took 6.536644ms waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700869 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707895 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.707918 2255048 pod_ready.go:81] duration metric: took 7.041207ms waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707930 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880293 2255048 pod_ready.go:92] pod "kube-proxy-f5w2x" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.880329 2255048 pod_ready.go:81] duration metric: took 172.39121ms waiting for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880345 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280038 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:42.280066 2255048 pod_ready.go:81] duration metric: took 399.713688ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280074 2255048 pod_ready.go:38] duration metric: took 5.668879257s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:42.280093 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:14:42.280143 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:14:42.303868 2255048 api_server.go:72] duration metric: took 5.855535753s to wait for apiserver process to appear ...
	I0911 12:14:42.303906 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:14:42.303927 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:14:42.310890 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:14:42.313428 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:14:42.313455 2255048 api_server.go:131] duration metric: took 9.541682ms to wait for apiserver health ...
	I0911 12:14:42.313464 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:14:42.483863 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:14:42.483895 2255048 system_pods.go:61] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.483900 2255048 system_pods.go:61] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.483905 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.483909 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.483912 2255048 system_pods.go:61] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.483916 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.483923 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.483930 2255048 system_pods.go:61] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.483936 2255048 system_pods.go:74] duration metric: took 170.467243ms to wait for pod list to return data ...
	I0911 12:14:42.483945 2255048 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:14:42.679235 2255048 default_sa.go:45] found service account: "default"
	I0911 12:14:42.679270 2255048 default_sa.go:55] duration metric: took 195.319105ms for default service account to be created ...
	I0911 12:14:42.679284 2255048 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:14:42.883048 2255048 system_pods.go:86] 8 kube-system pods found
	I0911 12:14:42.883078 2255048 system_pods.go:89] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.883084 2255048 system_pods.go:89] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.883089 2255048 system_pods.go:89] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.883093 2255048 system_pods.go:89] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.883097 2255048 system_pods.go:89] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.883103 2255048 system_pods.go:89] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.883110 2255048 system_pods.go:89] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.883118 2255048 system_pods.go:89] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.883126 2255048 system_pods.go:126] duration metric: took 203.835523ms to wait for k8s-apps to be running ...
	I0911 12:14:42.883133 2255048 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:14:42.883181 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:42.897962 2255048 system_svc.go:56] duration metric: took 14.812893ms WaitForService to wait for kubelet.
	I0911 12:14:42.898000 2255048 kubeadm.go:581] duration metric: took 6.449678905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:14:42.898022 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:14:43.080859 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:14:43.080890 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:14:43.080901 2255048 node_conditions.go:105] duration metric: took 182.874167ms to run NodePressure ...
	I0911 12:14:43.080913 2255048 start.go:228] waiting for startup goroutines ...
	I0911 12:14:43.080919 2255048 start.go:233] waiting for cluster config update ...
	I0911 12:14:43.080930 2255048 start.go:242] writing updated cluster config ...
	I0911 12:14:43.081223 2255048 ssh_runner.go:195] Run: rm -f paused
	I0911 12:14:43.135636 2255048 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:14:43.137835 2255048 out.go:177] * Done! kubectl is now configured to use "no-preload-352076" cluster and "default" namespace by default
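	The "Enabled addons" and "Done!" lines above are where the captured start-up log for the no-preload-352076 profile ends. As a minimal sketch only (not part of the captured log), the same end state could be re-checked by hand using the profile/context name minikube reports, assuming the metrics-server addon carries its usual k8s-app=metrics-server label:
	
	  minikube -p no-preload-352076 addons list
	  kubectl --context no-preload-352076 get nodes
	  kubectl --context no-preload-352076 -n kube-system get pods -l k8s-app=metrics-server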
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:07:45 UTC, ends at Mon 2023-09-11 12:17:42 UTC. --
	Sep 11 12:17:41 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:41.812880853Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io
.kubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=8f8c0443-4695-41e8-86e7-c06af2e4feec name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.189514491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=64aee59f-de0a-40a8-8351-f394cbf12b09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.189611167Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=64aee59f-de0a-40a8-8351-f394cbf12b09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.189896077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=64aee59f-de0a-40a8-8351-f394cbf12b09 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.228345823Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=403c6590-7461-4ee0-9382-324821c4176f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.228450477Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=403c6590-7461-4ee0-9382-324821c4176f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.228678334Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=403c6590-7461-4ee0-9382-324821c4176f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.267755069Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f0619bd8-500e-4b74-8e02-cf4a08e8126f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.267861877Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f0619bd8-500e-4b74-8e02-cf4a08e8126f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.268160591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f0619bd8-500e-4b74-8e02-cf4a08e8126f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.304612825Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=93981e2a-3d88-4b43-a74f-1d73228e3f60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.304705747Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=93981e2a-3d88-4b43-a74f-1d73228e3f60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.304907679Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=93981e2a-3d88-4b43-a74f-1d73228e3f60 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.349867886Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a2aa49e8-adba-404c-8bb5-5fdf650bf4ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.349934114Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a2aa49e8-adba-404c-8bb5-5fdf650bf4ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.350223982Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a2aa49e8-adba-404c-8bb5-5fdf650bf4ae name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.390490598Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7745ad21-0451-4064-b27b-8375e520904e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.390565281Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7745ad21-0451-4064-b27b-8375e520904e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.390762296Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7745ad21-0451-4064-b27b-8375e520904e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.428036997Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=d96c8994-e91c-4876-a22d-e9c806596aea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.428134172Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=d96c8994-e91c-4876-a22d-e9c806596aea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.428336843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=d96c8994-e91c-4876-a22d-e9c806596aea name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.464283783Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0c2a3eb0-4eb3-4ba2-b439-54cae185cb1e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.464351951Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c2a3eb0-4eb3-4ba2-b439-54cae185cb1e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:17:42 old-k8s-version-642215 crio[718]: time="2023-09-11 12:17:42.464629874Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c2a3eb0-4eb3-4ba2-b439-54cae185cb1e name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	252b88d2a887d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      8 minutes ago       Running             storage-provisioner       1                   80174bcc525b2
	86306e2a9af35       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      9 minutes ago       Running             coredns                   0                   9fe8b0836b24b
	397e2be089f5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   9 minutes ago       Running             busybox                   0                   eba2f18ad5b1d
	0100bc00d8805       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      9 minutes ago       Running             kube-proxy                0                   86c017cad5241
	917b7542db061       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   80174bcc525b2
	5b13b1dd138c8       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      9 minutes ago       Running             etcd                      0                   389aca9390241
	5e048369058e0       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      9 minutes ago       Running             kube-scheduler            0                   abb8204cbce23
	3fd47e8d5f66c       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      9 minutes ago       Running             kube-controller-manager   0                   2a390cdc636bc
	aa4b9a425227b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      9 minutes ago       Running             kube-apiserver            0                   2b7d63c9205b8
	
	* 
	* ==> coredns [86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c] <==
	* E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	Trace[1947692426]: [30.001021226s] [30.001021226s] END
	E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0911 11:59:37.288648       1 trace.go:82] Trace[1543856988]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-11 11:59:07.287620561 +0000 UTC m=+0.030279782) (total time: 30.000974962s):
	Trace[1543856988]: [30.000974962s] [30.000974962s] END
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0911 11:59:37.288710       1 trace.go:82] Trace[77262156]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-11 11:59:07.287381733 +0000 UTC m=+0.030040941) (total time: 30.000718939s):
	Trace[77262156]: [30.000718939s] [30.000718939s] END
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2023-09-11T12:08:35.531Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	2023-09-11T12:08:35.531Z [INFO] CoreDNS-1.6.2
	2023-09-11T12:08:35.531Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-11T12:08:36.541Z [INFO] 127.0.0.1:45893 - 52476 "HINFO IN 3455477577780142367.1809258028112430835. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009990164s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-642215
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-642215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=old-k8s-version-642215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_58_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:58:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:16:58 +0000   Mon, 11 Sep 2023 11:58:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:16:58 +0000   Mon, 11 Sep 2023 11:58:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:16:58 +0000   Mon, 11 Sep 2023 11:58:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:16:58 +0000   Mon, 11 Sep 2023 12:08:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.58
	  Hostname:    old-k8s-version-642215
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 c4a1895d43864ad098ba11bad3a19aef
	 System UUID:                c4a1895d-4386-4ad0-98ba-11bad3a19aef
	 Boot ID:                    f801e2ce-f70e-4d17-aa0d-5cd42b3034dc
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                coredns-5644d7b6d9-55m96                          100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                etcd-old-k8s-version-642215                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                kube-apiserver-old-k8s-version-642215             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-controller-manager-old-k8s-version-642215    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m15s
	  kube-system                kube-proxy-855lt                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-scheduler-old-k8s-version-642215             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                metrics-server-74d5856cc6-7w6xl                   100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         8m59s
	  kube-system                storage-provisioner                               0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From                                Message
	  ----    ------                   ----                   ----                                -------
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)      kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)      kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)      kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientPID
	  Normal  Starting                 18m                    kube-proxy, old-k8s-version-642215  Starting kube-proxy.
	  Normal  Starting                 9m25s                  kubelet, old-k8s-version-642215     Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m25s (x8 over 9m25s)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m25s (x8 over 9m25s)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m25s (x7 over 9m25s)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m25s                  kubelet, old-k8s-version-642215     Updated Node Allocatable limit across pods
	  Normal  Starting                 9m13s                  kube-proxy, old-k8s-version-642215  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep11 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.109713] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.969078] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.735757] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154810] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.480068] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.573276] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.126136] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.189633] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.135092] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.292389] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[Sep11 12:08] systemd-fstab-generator[1041]: Ignoring "noauto" for root device
	[  +0.463410] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +16.984808] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a] <==
	* 2023-09-11 12:08:21.358216 I | embed: listening for metrics on http://192.168.61.58:2381
	2023-09-11 12:08:21.358600 I | embed: listening for metrics on http://127.0.0.1:2381
	2023-09-11 12:08:21.358716 I | etcdserver/membership: added member da8d605abec0c6c9 [https://192.168.61.58:2380] to cluster 2d1820130fad6930
	2023-09-11 12:08:21.359152 N | etcdserver/membership: set the initial cluster version to 3.3
	2023-09-11 12:08:21.359240 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-11 12:08:23.034901 I | raft: da8d605abec0c6c9 is starting a new election at term 2
	2023-09-11 12:08:23.034965 I | raft: da8d605abec0c6c9 became candidate at term 3
	2023-09-11 12:08:23.035063 I | raft: da8d605abec0c6c9 received MsgVoteResp from da8d605abec0c6c9 at term 3
	2023-09-11 12:08:23.035079 I | raft: da8d605abec0c6c9 became leader at term 3
	2023-09-11 12:08:23.035087 I | raft: raft.node: da8d605abec0c6c9 elected leader da8d605abec0c6c9 at term 3
	2023-09-11 12:08:23.035484 I | etcdserver: published {Name:old-k8s-version-642215 ClientURLs:[https://192.168.61.58:2379]} to cluster 2d1820130fad6930
	2023-09-11 12:08:23.035597 I | embed: ready to serve client requests
	2023-09-11 12:08:23.035808 I | embed: ready to serve client requests
	2023-09-11 12:08:23.038737 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-11 12:08:23.038853 I | embed: serving client requests on 192.168.61.58:2379
	2023-09-11 12:08:27.062752 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-642215\" " with result "range_response_count:1 size:3164" took too long (202.255424ms) to execute
	2023-09-11 12:08:27.487950 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-642215\" " with result "range_response_count:1 size:4117" took too long (626.535231ms) to execute
	2023-09-11 12:08:27.494359 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (633.070733ms) to execute
	2023-09-11 12:08:27.585228 W | etcdserver: request "header:<ID:14324132389981804169 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-642215.1783d6d59e60d990\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-642215.1783d6d59e60d990\" value_size:300 lease:5100760353127028357 >> failure:<>>" with result "size:16" took too long (288.451167ms) to execute
	2023-09-11 12:08:27.597862 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-642215\" " with result "range_response_count:1 size:2852" took too long (109.159968ms) to execute
	2023-09-11 12:08:27.599229 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-old-k8s-version-642215\" " with result "range_response_count:1 size:2288" took too long (531.498271ms) to execute
	2023-09-11 12:08:27.599898 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (351.172727ms) to execute
	2023-09-11 12:08:27.627214 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (119.160178ms) to execute
	2023-09-11 12:08:27.627465 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (119.702891ms) to execute
	2023-09-11 12:08:57.099478 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:0 size:5" took too long (149.502264ms) to execute
	
	* 
	* ==> kernel <==
	*  12:17:42 up 10 min,  0 users,  load average: 0.22, 0.12, 0.09
	Linux old-k8s-version-642215 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259] <==
	* I0911 12:09:28.559160       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:09:28.559395       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:09:28.559511       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:09:28.559542       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:11:28.560121       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:11:28.560561       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:11:28.560689       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:11:28.560736       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:13:27.828712       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:13:27.829206       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:13:27.829320       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:13:27.829402       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:14:27.829930       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:14:27.830441       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:14:27.830543       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:14:27.830573       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:16:27.831117       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:16:27.831680       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:16:27.831851       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:16:27.831902       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d] <==
	* E0911 12:11:15.511544       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:11:26.846265       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:11:45.764480       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:11:58.848625       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:12:16.017437       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:12:30.852413       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:12:46.270555       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:13:02.854572       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:13:16.523087       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:13:34.857128       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:13:46.775360       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:14:06.859636       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:14:17.027771       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:14:38.862196       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:14:47.280278       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:15:10.865458       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:15:17.532666       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:15:42.867664       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:15:47.784783       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:16:14.869840       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:16:18.037151       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:16:46.872877       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:16:48.289461       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	E0911 12:17:18.541425       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:17:18.876416       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	
	* 
	* ==> kube-proxy [0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1] <==
	* W0911 11:59:07.331953       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0911 11:59:07.342518       1 node.go:135] Successfully retrieved node IP: 192.168.61.58
	I0911 11:59:07.342604       1 server_others.go:149] Using iptables Proxier.
	I0911 11:59:07.343266       1 server.go:529] Version: v1.16.0
	I0911 11:59:07.343702       1 config.go:313] Starting service config controller
	I0911 11:59:07.343732       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0911 11:59:07.345320       1 config.go:131] Starting endpoints config controller
	I0911 11:59:07.348510       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0911 11:59:07.444144       1 shared_informer.go:204] Caches are synced for service config 
	I0911 11:59:07.449135       1 shared_informer.go:204] Caches are synced for endpoints config 
	E0911 12:00:21.044182       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=490&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp 192.168.61.58:8443: connect: connection refused
	E0911 12:00:21.044690       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=491&timeout=9m1s&timeoutSeconds=541&watch=true: dial tcp 192.168.61.58:8443: connect: connection refused
	W0911 12:08:29.006908       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0911 12:08:29.021959       1 node.go:135] Successfully retrieved node IP: 192.168.61.58
	I0911 12:08:29.022112       1 server_others.go:149] Using iptables Proxier.
	I0911 12:08:29.023473       1 server.go:529] Version: v1.16.0
	I0911 12:08:29.024283       1 config.go:131] Starting endpoints config controller
	I0911 12:08:29.024326       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0911 12:08:29.024379       1 config.go:313] Starting service config controller
	I0911 12:08:29.024385       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0911 12:08:29.125077       1 shared_informer.go:204] Caches are synced for service config 
	I0911 12:08:29.125216       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a] <==
	* E0911 11:58:43.367339       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:58:44.349215       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:58:44.349346       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:58:44.349416       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:58:44.352263       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:58:44.352373       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:58:44.373457       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:58:44.376392       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:58:44.376498       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:58:44.376557       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:58:44.379431       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:58:44.379561       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:59:04.493588       1 factory.go:585] pod is already present in the activeQ
	E0911 11:59:04.595426       1 factory.go:585] pod is already present in the activeQ
	I0911 12:08:20.259933       1 serving.go:319] Generated self-signed cert in-memory
	W0911 12:08:26.769260       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 12:08:26.769456       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 12:08:26.769581       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 12:08:26.769595       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 12:08:26.782338       1 server.go:143] Version: v1.16.0
	I0911 12:08:26.782569       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0911 12:08:26.787301       1 authorization.go:47] Authorization is disabled
	W0911 12:08:26.787347       1 authentication.go:79] Authentication is disabled
	I0911 12:08:26.787365       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0911 12:08:26.787827       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:07:45 UTC, ends at Mon 2023-09-11 12:17:43 UTC. --
	Sep 11 12:13:14 old-k8s-version-642215 kubelet[1047]: E0911 12:13:14.801311    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:13:17 old-k8s-version-642215 kubelet[1047]: E0911 12:13:17.866134    1047 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Sep 11 12:13:26 old-k8s-version-642215 kubelet[1047]: E0911 12:13:26.800924    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:13:41 old-k8s-version-642215 kubelet[1047]: E0911 12:13:41.801782    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:13:56 old-k8s-version-642215 kubelet[1047]: E0911 12:13:56.801160    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:14:10 old-k8s-version-642215 kubelet[1047]: E0911 12:14:10.802527    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:14:22 old-k8s-version-642215 kubelet[1047]: E0911 12:14:22.801878    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:14:34 old-k8s-version-642215 kubelet[1047]: E0911 12:14:34.826217    1047 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:14:34 old-k8s-version-642215 kubelet[1047]: E0911 12:14:34.826320    1047 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:14:34 old-k8s-version-642215 kubelet[1047]: E0911 12:14:34.826390    1047 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:14:34 old-k8s-version-642215 kubelet[1047]: E0911 12:14:34.826437    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 11 12:14:47 old-k8s-version-642215 kubelet[1047]: E0911 12:14:47.812321    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:15:00 old-k8s-version-642215 kubelet[1047]: E0911 12:15:00.801155    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:15:13 old-k8s-version-642215 kubelet[1047]: E0911 12:15:13.802458    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:15:25 old-k8s-version-642215 kubelet[1047]: E0911 12:15:25.801089    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:15:40 old-k8s-version-642215 kubelet[1047]: E0911 12:15:40.801123    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:15:55 old-k8s-version-642215 kubelet[1047]: E0911 12:15:55.802145    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:16:10 old-k8s-version-642215 kubelet[1047]: E0911 12:16:10.801527    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:16:23 old-k8s-version-642215 kubelet[1047]: E0911 12:16:23.806454    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:16:35 old-k8s-version-642215 kubelet[1047]: E0911 12:16:35.801693    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:16:50 old-k8s-version-642215 kubelet[1047]: E0911 12:16:50.801454    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:17:01 old-k8s-version-642215 kubelet[1047]: E0911 12:17:01.802181    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:17:15 old-k8s-version-642215 kubelet[1047]: E0911 12:17:15.801422    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:17:28 old-k8s-version-642215 kubelet[1047]: E0911 12:17:28.801026    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:17:42 old-k8s-version-642215 kubelet[1047]: E0911 12:17:42.801358    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4] <==
	* I0911 12:08:59.221337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:08:59.237126       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:08:59.237301       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:09:16.692256       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:09:16.692920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_f7824995-0433-48d5-8675-51099812bef5!
	I0911 12:09:16.692538       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34515631-2adf-4713-905f-9eb8481301ed", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-642215_f7824995-0433-48d5-8675-51099812bef5 became leader
	I0911 12:09:16.794185       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_f7824995-0433-48d5-8675-51099812bef5!
	
	* 
	* ==> storage-provisioner [917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476] <==
	* I0911 11:59:07.720670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 11:59:07.735303       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 11:59:07.735419       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 11:59:07.749233       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 11:59:07.750185       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_e90f9802-99cb-4fb2-ac8f-8b5869f829c1!
	I0911 11:59:07.749640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34515631-2adf-4713-905f-9eb8481301ed", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-642215_e90f9802-99cb-4fb2-ac8f-8b5869f829c1 became leader
	I0911 11:59:07.851899       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_e90f9802-99cb-4fb2-ac8f-8b5869f829c1!
	I0911 12:08:28.658365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0911 12:08:58.666634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642215 -n old-k8s-version-642215
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-642215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-7w6xl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-642215 describe pod metrics-server-74d5856cc6-7w6xl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-642215 describe pod metrics-server-74d5856cc6-7w6xl: exit status 1 (76.191583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-7w6xl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-642215 describe pod metrics-server-74d5856cc6-7w6xl: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.35s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.4s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-235462 -n embed-certs-235462
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:22:06.43717302 +0000 UTC m=+5131.749797913
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-235462 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-235462 logs -n 25: (1.688089868s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:57 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559775 ssh                                | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559775 -- sudo                         | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559775                                 | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-352076             | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:59 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-235462            | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642215        | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:01 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-226537 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	|         | disable-driver-mounts-226537                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:02 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-352076                  | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484027  | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-235462                 | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642215             | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484027       | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC | 11 Sep 23 12:13 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:04:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:04:58.034724 2255814 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:04:58.034920 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.034929 2255814 out.go:309] Setting ErrFile to fd 2...
	I0911 12:04:58.034933 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.035102 2255814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:04:58.035709 2255814 out.go:303] Setting JSON to false
	I0911 12:04:58.036651 2255814 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236849,"bootTime":1694197049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:04:58.036727 2255814 start.go:138] virtualization: kvm guest
	I0911 12:04:58.039239 2255814 out.go:177] * [default-k8s-diff-port-484027] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:04:58.041110 2255814 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:04:58.041181 2255814 notify.go:220] Checking for updates...
	I0911 12:04:58.042795 2255814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:04:58.044550 2255814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:04:58.046047 2255814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:04:58.047718 2255814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:04:58.049343 2255814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:04:58.051545 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:04:58.051956 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.052047 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.068212 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0911 12:04:58.068649 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.069289 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.069318 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.069763 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.069987 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.070276 2255814 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:04:58.070629 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.070670 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.085941 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0911 12:04:58.086461 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.086966 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.086995 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.087337 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.087522 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.126206 2255814 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 12:04:58.127558 2255814 start.go:298] selected driver: kvm2
	I0911 12:04:58.127571 2255814 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kuber
netesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRe
quested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.127716 2255814 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:04:58.128430 2255814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.128555 2255814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:04:58.144660 2255814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:04:58.145091 2255814 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 12:04:58.145145 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:04:58.145159 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:04:58.145176 2255814 start_flags.go:321] config:
	{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-48402
7 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/
home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.145377 2255814 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.147634 2255814 out.go:177] * Starting control plane node default-k8s-diff-port-484027 in cluster default-k8s-diff-port-484027
	I0911 12:04:56.741109 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:04:58.149438 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:04:58.149510 2255814 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:04:58.149543 2255814 cache.go:57] Caching tarball of preloaded images
	I0911 12:04:58.149650 2255814 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:04:58.149664 2255814 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:04:58.149825 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:04:58.150070 2255814 start.go:365] acquiring machines lock for default-k8s-diff-port-484027: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:04:59.813165 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:05.893188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:08.965171 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:15.045168 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:18.117188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:24.197148 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:27.269089 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:33.349151 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:36.421191 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:42.501129 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:45.573209 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:51.653159 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:54.725153 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:00.805201 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:03.877105 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:09.957136 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:13.029119 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:19.109157 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:22.181096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:28.261156 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:31.333179 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:37.413187 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:40.485239 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:46.565193 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:49.637182 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:55.717194 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:58.789181 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:04.869137 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:07.941096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:10.946790 2255187 start.go:369] acquired machines lock for "embed-certs-235462" in 4m28.227506413s
	I0911 12:07:10.946859 2255187 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:10.946884 2255187 fix.go:54] fixHost starting: 
	I0911 12:07:10.947279 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:10.947318 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:10.963823 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0911 12:07:10.964352 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:10.965050 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:07:10.965086 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:10.965556 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:10.965804 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:10.965995 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:07:10.967759 2255187 fix.go:102] recreateIfNeeded on embed-certs-235462: state=Stopped err=<nil>
	I0911 12:07:10.967790 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	W0911 12:07:10.968000 2255187 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:10.970103 2255187 out.go:177] * Restarting existing kvm2 VM for "embed-certs-235462" ...
	I0911 12:07:10.971879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Start
	I0911 12:07:10.972130 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring networks are active...
	I0911 12:07:10.973084 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network default is active
	I0911 12:07:10.973424 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network mk-embed-certs-235462 is active
	I0911 12:07:10.973888 2255187 main.go:141] libmachine: (embed-certs-235462) Getting domain xml...
	I0911 12:07:10.974726 2255187 main.go:141] libmachine: (embed-certs-235462) Creating domain...
	I0911 12:07:12.246736 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting to get IP...
	I0911 12:07:12.247648 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.248019 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.248152 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.248016 2256167 retry.go:31] will retry after 245.040457ms: waiting for machine to come up
	I0911 12:07:12.494788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.495311 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.495345 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.495247 2256167 retry.go:31] will retry after 312.634812ms: waiting for machine to come up
	I0911 12:07:10.943345 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:10.943403 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:07:10.946565 2255048 machine.go:91] provisioned docker machine in 4m37.405921901s
	I0911 12:07:10.946641 2255048 fix.go:56] fixHost completed within 4m37.430192233s
	I0911 12:07:10.946648 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 4m37.430236677s
	W0911 12:07:10.946673 2255048 start.go:672] error starting host: provision: host is not running
	W0911 12:07:10.946819 2255048 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0911 12:07:10.946833 2255048 start.go:687] Will try again in 5 seconds ...
	I0911 12:07:12.810038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.810461 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.810496 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.810398 2256167 retry.go:31] will retry after 478.036066ms: waiting for machine to come up
	I0911 12:07:13.290252 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.290701 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.290731 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.290646 2256167 retry.go:31] will retry after 576.124591ms: waiting for machine to come up
	I0911 12:07:13.868555 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.868978 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.869004 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.868931 2256167 retry.go:31] will retry after 487.107859ms: waiting for machine to come up
	I0911 12:07:14.357765 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:14.358240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:14.358315 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:14.358173 2256167 retry.go:31] will retry after 903.857312ms: waiting for machine to come up
	I0911 12:07:15.263350 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:15.263852 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:15.263908 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:15.263777 2256167 retry.go:31] will retry after 830.555039ms: waiting for machine to come up
	I0911 12:07:16.096337 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:16.096743 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:16.096774 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:16.096696 2256167 retry.go:31] will retry after 1.307188723s: waiting for machine to come up
	I0911 12:07:17.406129 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:17.406558 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:17.406584 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:17.406512 2256167 retry.go:31] will retry after 1.681887732s: waiting for machine to come up
	I0911 12:07:15.947503 2255048 start.go:365] acquiring machines lock for no-preload-352076: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:07:19.090590 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:19.091013 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:19.091038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:19.090965 2256167 retry.go:31] will retry after 2.013298988s: waiting for machine to come up
	I0911 12:07:21.105851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:21.106384 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:21.106418 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:21.106319 2256167 retry.go:31] will retry after 2.714578164s: waiting for machine to come up
	I0911 12:07:23.823181 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:23.823687 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:23.823772 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:23.823623 2256167 retry.go:31] will retry after 2.321779277s: waiting for machine to come up
	I0911 12:07:26.147527 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:26.147956 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:26.147986 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:26.147896 2256167 retry.go:31] will retry after 4.307300197s: waiting for machine to come up
	I0911 12:07:31.786165 2255304 start.go:369] acquired machines lock for "old-k8s-version-642215" in 4m38.564304718s
	I0911 12:07:31.786239 2255304 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:31.786261 2255304 fix.go:54] fixHost starting: 
	I0911 12:07:31.786754 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:31.786809 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:31.806853 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0911 12:07:31.807320 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:31.807871 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:07:31.807906 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:31.808284 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:31.808473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:31.808622 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:07:31.810311 2255304 fix.go:102] recreateIfNeeded on old-k8s-version-642215: state=Stopped err=<nil>
	I0911 12:07:31.810345 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	W0911 12:07:31.810524 2255304 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:31.813302 2255304 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642215" ...
	I0911 12:07:30.458075 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.458554 2255187 main.go:141] libmachine: (embed-certs-235462) Found IP for machine: 192.168.50.96
	I0911 12:07:30.458579 2255187 main.go:141] libmachine: (embed-certs-235462) Reserving static IP address...
	I0911 12:07:30.458593 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has current primary IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.459036 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.459066 2255187 main.go:141] libmachine: (embed-certs-235462) Reserved static IP address: 192.168.50.96
	I0911 12:07:30.459088 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | skip adding static IP to network mk-embed-certs-235462 - found existing host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"}
	I0911 12:07:30.459104 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Getting to WaitForSSH function...
	I0911 12:07:30.459117 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting for SSH to be available...
	I0911 12:07:30.461594 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.461938 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.461979 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.462087 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH client type: external
	I0911 12:07:30.462109 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa (-rw-------)
	I0911 12:07:30.462146 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:30.462165 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | About to run SSH command:
	I0911 12:07:30.462200 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | exit 0
	I0911 12:07:30.556976 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:30.557370 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetConfigRaw
	I0911 12:07:30.558054 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.560898 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561254 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.561292 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561638 2255187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/config.json ...
	I0911 12:07:30.561863 2255187 machine.go:88] provisioning docker machine ...
	I0911 12:07:30.561885 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:30.562128 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562296 2255187 buildroot.go:166] provisioning hostname "embed-certs-235462"
	I0911 12:07:30.562315 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562497 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.565095 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565484 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.565519 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565682 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.565852 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566021 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566126 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.566273 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.566796 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.566814 2255187 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-235462 && echo "embed-certs-235462" | sudo tee /etc/hostname
	I0911 12:07:30.706262 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-235462
	
	I0911 12:07:30.706294 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.709499 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.709822 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.709862 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.710067 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.710331 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710598 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710762 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.710986 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.711479 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.711503 2255187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235462/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:30.850084 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:30.850120 2255187 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:30.850141 2255187 buildroot.go:174] setting up certificates
	I0911 12:07:30.850155 2255187 provision.go:83] configureAuth start
	I0911 12:07:30.850168 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.850494 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.853326 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853650 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.853680 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853864 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.856233 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856574 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.856639 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856755 2255187 provision.go:138] copyHostCerts
	I0911 12:07:30.856844 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:30.856859 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:30.856933 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:30.857039 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:30.857050 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:30.857078 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:30.857143 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:30.857150 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:30.857170 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:30.857217 2255187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235462 san=[192.168.50.96 192.168.50.96 localhost 127.0.0.1 minikube embed-certs-235462]
	I0911 12:07:30.996533 2255187 provision.go:172] copyRemoteCerts
	I0911 12:07:30.996607 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:30.996643 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.999950 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.000370 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000514 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.000787 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.000978 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.001133 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.095524 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:31.121456 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:31.145813 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0911 12:07:31.171621 2255187 provision.go:86] duration metric: configureAuth took 321.448095ms
	I0911 12:07:31.171657 2255187 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:31.171880 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:07:31.171975 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.175276 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.175783 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.175819 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.176082 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.176356 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176535 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176724 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.177014 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.177500 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.177521 2255187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:31.514064 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:31.514090 2255187 machine.go:91] provisioned docker machine in 952.213137ms
	I0911 12:07:31.514101 2255187 start.go:300] post-start starting for "embed-certs-235462" (driver="kvm2")
	I0911 12:07:31.514135 2255187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:31.514188 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.514651 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:31.514705 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.517108 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517563 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.517599 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517819 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.518053 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.518243 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.518426 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.612293 2255187 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:31.616991 2255187 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:31.617022 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:31.617143 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:31.617263 2255187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:31.617393 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:31.627725 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:31.652196 2255187 start.go:303] post-start completed in 138.067305ms
	I0911 12:07:31.652232 2255187 fix.go:56] fixHost completed within 20.705348144s
	I0911 12:07:31.652264 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.655234 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655598 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.655633 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655810 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.656000 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656236 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656373 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.656547 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.657061 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.657078 2255187 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:07:31.785981 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434051.730508911
	
	I0911 12:07:31.786019 2255187 fix.go:206] guest clock: 1694434051.730508911
	I0911 12:07:31.786029 2255187 fix.go:219] Guest: 2023-09-11 12:07:31.730508911 +0000 UTC Remote: 2023-09-11 12:07:31.65223725 +0000 UTC m=+289.079171252 (delta=78.271661ms)
	I0911 12:07:31.786076 2255187 fix.go:190] guest clock delta is within tolerance: 78.271661ms
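
	fix.go compares the guest's `date +%s.%N` output against the host clock and only resyncs when the delta exceeds a tolerance. A tiny Go sketch of that comparison follows; it is illustrative, and the 2s tolerance is an assumption rather than the value minikube uses.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// withinTolerance parses the guest's "seconds.nanoseconds" output and reports
	// the delta against the local (host) clock.
	func withinTolerance(guestOut string, tolerance time.Duration) (time.Duration, bool) {
		parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
		sec, _ := strconv.ParseInt(parts[0], 10, 64)
		var nsec int64
		if len(parts) == 2 {
			nsec, _ = strconv.ParseInt(parts[1], 10, 64)
		}
		delta := time.Since(time.Unix(sec, nsec))
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		d, ok := withinTolerance("1694434051.730508911", 2*time.Second)
		fmt.Println(d, ok)
	}
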
	I0911 12:07:31.786082 2255187 start.go:83] releasing machines lock for "embed-certs-235462", held for 20.839248295s
	I0911 12:07:31.786115 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.786440 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:31.789427 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.789809 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.789844 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.790024 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790717 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790954 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.791062 2255187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:31.791130 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.791177 2255187 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:31.791208 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.793991 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794359 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794393 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794414 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794669 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.794879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.794871 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794913 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.795104 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.795112 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795289 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.795291 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.795468 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795585 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.910483 2255187 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:31.916739 2255187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:32.059621 2255187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:32.066857 2255187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:32.066955 2255187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:32.084365 2255187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
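
	Any bridge or podman CNI configs are renamed out of the way so the runtime only sees the configuration minikube manages. A small Go sketch of that rename pass (illustrative only; the directory path matches the log):

	package main

	import (
		"fmt"
		"os"
		"path/filepath"
		"strings"
	)

	// disableBridgeCNI renames bridge/podman configs to *.mk_disabled, mirroring
	// the `find ... -exec mv` command in the log.
	func disableBridgeCNI(dir string) ([]string, error) {
		entries, err := os.ReadDir(dir)
		if err != nil {
			return nil, err
		}
		var disabled []string
		for _, e := range entries {
			name := e.Name()
			if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
				continue
			}
			if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
				src := filepath.Join(dir, name)
				if err := os.Rename(src, src+".mk_disabled"); err != nil {
					return disabled, err
				}
				disabled = append(disabled, src)
			}
		}
		return disabled, nil
	}

	func main() {
		files, err := disableBridgeCNI("/etc/cni/net.d")
		fmt.Println(files, err)
	}
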
	I0911 12:07:32.084401 2255187 start.go:466] detecting cgroup driver to use...
	I0911 12:07:32.084518 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:32.098782 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:32.111344 2255187 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:32.111421 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:32.124323 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:32.137910 2255187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:32.244478 2255187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:32.374160 2255187 docker.go:212] disabling docker service ...
	I0911 12:07:32.374262 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:32.387762 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:32.401120 2255187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:32.522150 2255187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:31.815672 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Start
	I0911 12:07:31.815900 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring networks are active...
	I0911 12:07:31.816771 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network default is active
	I0911 12:07:31.817161 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network mk-old-k8s-version-642215 is active
	I0911 12:07:31.817559 2255304 main.go:141] libmachine: (old-k8s-version-642215) Getting domain xml...
	I0911 12:07:31.818275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Creating domain...
	I0911 12:07:32.639647 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:32.658106 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:32.677573 2255187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:07:32.677658 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.687407 2255187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:32.687499 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.697706 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.707515 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.718090 2255187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:32.728668 2255187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:32.737652 2255187 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:32.737732 2255187 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:32.751510 2255187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:32.760774 2255187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:32.881718 2255187 ssh_runner.go:195] Run: sudo systemctl restart crio
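
	The CRI-O runtime is reconfigured with plain sed edits: point pause_image at registry.k8s.io/pause:3.9, force cgroup_manager to cgroupfs, then reload systemd and restart the service. A minimal Go sketch of the same config rewrite (a sketch under those assumptions, not minikube's implementation):

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// patchCrioConf rewrites the pause_image and cgroup_manager lines in a
	// crio.conf.d drop-in, the way the sed commands in the log do.
	func patchCrioConf(path, pauseImage, cgroupMgr string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
			ReplaceAll(data, []byte(fmt.Sprintf("pause_image = %q", pauseImage)))
		out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
			ReplaceAll(out, []byte(fmt.Sprintf("cgroup_manager = %q", cgroupMgr)))
		return os.WriteFile(path, out, 0o644)
	}

	func main() {
		err := patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
			"registry.k8s.io/pause:3.9", "cgroupfs")
		fmt.Println(err)
	}
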
	I0911 12:07:33.064736 2255187 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:33.064859 2255187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:33.071112 2255187 start.go:534] Will wait 60s for crictl version
	I0911 12:07:33.071195 2255187 ssh_runner.go:195] Run: which crictl
	I0911 12:07:33.075202 2255187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:33.111795 2255187 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:33.111898 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.162455 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.224538 2255187 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:07:33.226156 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:33.229640 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230164 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:33.230202 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230434 2255187 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:33.235232 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:33.248016 2255187 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:07:33.248094 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:33.290506 2255187 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:07:33.290594 2255187 ssh_runner.go:195] Run: which lz4
	I0911 12:07:33.294802 2255187 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 12:07:33.299115 2255187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:33.299169 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:07:35.241115 2255187 crio.go:444] Took 1.946355 seconds to copy over tarball
	I0911 12:07:35.241211 2255187 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
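
	The preload path boils down to: check CRI-O's image store, and if the expected images are missing, ship the lz4 tarball over and unpack it into /var. A compact Go sketch of that check-then-extract flow (illustrative; the image name and tarball path are taken from the log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// ensurePreloaded skips the extraction when the image is already present,
	// otherwise unpacks the lz4-compressed tarball the same way the log does.
	func ensurePreloaded(image, tarball string) error {
		out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
		if err == nil && strings.Contains(string(out), image) {
			return nil // images already preloaded, nothing to do
		}
		return exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).Run()
	}

	func main() {
		err := ensurePreloaded("registry.k8s.io/kube-apiserver:v1.28.1", "/preloaded.tar.lz4")
		fmt.Println(err)
	}
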
	I0911 12:07:33.131519 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting to get IP...
	I0911 12:07:33.132551 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.133144 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.133255 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.133123 2256281 retry.go:31] will retry after 206.885556ms: waiting for machine to come up
	I0911 12:07:33.341966 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.342472 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.342495 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.342420 2256281 retry.go:31] will retry after 235.74047ms: waiting for machine to come up
	I0911 12:07:33.580161 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.580683 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.580720 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.580644 2256281 retry.go:31] will retry after 407.752379ms: waiting for machine to come up
	I0911 12:07:33.990505 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.991033 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.991099 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.991019 2256281 retry.go:31] will retry after 579.085044ms: waiting for machine to come up
	I0911 12:07:34.571958 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:34.572419 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:34.572451 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:34.572371 2256281 retry.go:31] will retry after 584.464544ms: waiting for machine to come up
	I0911 12:07:35.158152 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.158644 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.158677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.158579 2256281 retry.go:31] will retry after 750.2868ms: waiting for machine to come up
	I0911 12:07:35.910364 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.910949 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.910983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.910887 2256281 retry.go:31] will retry after 981.989906ms: waiting for machine to come up
	I0911 12:07:36.894691 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:36.895196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:36.895233 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:36.895151 2256281 retry.go:31] will retry after 1.082443232s: waiting for machine to come up
	I0911 12:07:37.979265 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:37.979773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:37.979802 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:37.979699 2256281 retry.go:31] will retry after 1.429811083s: waiting for machine to come up
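
	Meanwhile the old-k8s-version VM is still waiting for a DHCP lease, and each attempt backs off a little longer. A minimal Go sketch of such a wait loop follows; the lookup function and the backoff growth factor are hypothetical, used only to illustrate the retry pattern visible in the log.

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForIP polls a lookup function with a growing backoff until it returns
	// an address or the overall deadline is reached.
	func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		backoff := 200 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, err := lookup(); err == nil && ip != "" {
				return ip, nil
			}
			time.Sleep(backoff)
			if backoff < 5*time.Second {
				backoff = backoff * 3 / 2 // grow roughly like the retries in the log
			}
		}
		return "", errors.New("timed out waiting for machine IP")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, error) {
			attempts++
			if attempts < 4 {
				return "", errors.New("no lease yet")
			}
			return "192.168.61.58", nil
		}, time.Minute)
		fmt.Println(ip, err)
	}
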
	I0911 12:07:38.272328 2255187 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.031081597s)
	I0911 12:07:38.272378 2255187 crio.go:451] Took 3.031222 seconds to extract the tarball
	I0911 12:07:38.272392 2255187 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:07:38.314797 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:38.363925 2255187 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:07:38.363956 2255187 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:07:38.364034 2255187 ssh_runner.go:195] Run: crio config
	I0911 12:07:38.433884 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:38.433915 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:38.433941 2255187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:07:38.433969 2255187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235462 NodeName:embed-certs-235462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:07:38.434156 2255187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235462"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:07:38.434250 2255187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-235462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:07:38.434339 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:07:38.447171 2255187 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:07:38.447273 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:07:38.459426 2255187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:07:38.478081 2255187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:07:38.495571 2255187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0911 12:07:38.514602 2255187 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I0911 12:07:38.518616 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
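
	The one-liner above rewrites /etc/hosts in place: strip any existing control-plane.minikube.internal entry, append the fresh mapping, and copy the temp file back. A small Go sketch of the same idea (illustrative; error handling and the temp-file dance are trimmed):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// setHostsEntry drops any line whose last field is the given hostname and
	// appends a new "ip<TAB>host" mapping, mirroring the grep -v / echo pipeline.
	func setHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
			f := strings.Fields(line)
			if len(f) > 0 && f[len(f)-1] == host {
				continue // existing mapping for this host, replace it below
			}
			kept = append(kept, line)
		}
		kept = append(kept, ip+"\t"+host)
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
	}

	func main() {
		fmt.Println(setHostsEntry("/etc/hosts", "192.168.50.96", "control-plane.minikube.internal"))
	}
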
	I0911 12:07:38.531178 2255187 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462 for IP: 192.168.50.96
	I0911 12:07:38.531246 2255187 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:07:38.531410 2255187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:07:38.531471 2255187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:07:38.531565 2255187 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/client.key
	I0911 12:07:38.531650 2255187 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key.8e4e34e1
	I0911 12:07:38.531705 2255187 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key
	I0911 12:07:38.531860 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:07:38.531918 2255187 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:07:38.531933 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:07:38.531976 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:07:38.532020 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:07:38.532071 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:07:38.532140 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:38.532870 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:07:38.558426 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0911 12:07:38.582526 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:07:38.606798 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:07:38.630691 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:07:38.655580 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:07:38.682355 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:07:38.707701 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:07:38.732346 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:07:38.757688 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:07:38.783458 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:07:38.808481 2255187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:07:38.825822 2255187 ssh_runner.go:195] Run: openssl version
	I0911 12:07:38.831897 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:07:38.842170 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847385 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847467 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.853456 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:07:38.864049 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:07:38.874236 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879391 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879463 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.885352 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:07:38.895225 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:07:38.905599 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910660 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910748 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.916920 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
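
	Each CA bundle copied to /usr/share/ca-certificates is then exposed under /etc/ssl/certs via a symlink named after its OpenSSL subject hash, which is what the `openssl x509 -hash` / `ln -fs` pair above does. A rough Go sketch of that pattern (illustrative only; it shells out to openssl for the hash):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkCACert computes the subject hash of a PEM certificate and creates the
	// /etc/ssl/certs/<hash>.0 symlink if it does not already exist.
	func linkCACert(pemPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return "", err
		}
		link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
		if _, err := os.Lstat(link); err == nil {
			return link, nil // already linked
		}
		return link, os.Symlink(pemPath, link)
	}

	func main() {
		fmt.Println(linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"))
	}
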
	I0911 12:07:38.927096 2255187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:07:38.932313 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:07:38.939081 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:07:38.946028 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:07:38.952644 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:07:38.959391 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:07:38.965871 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
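
	The `-checkend 86400` runs above ask whether each control-plane certificate expires within the next 24 hours. The same check in Go, as a small illustrative sketch (the certificate path in main is just one of the files listed in the log):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in a PEM file expires
	// within the given window, equivalent to `openssl x509 -checkend`.
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return cert.NotAfter.Before(time.Now().Add(window)), nil
	}

	func main() {
		fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour))
	}
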
	I0911 12:07:38.972698 2255187 kubeadm.go:404] StartCluster: {Name:embed-certs-235462 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:07:38.972838 2255187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:07:38.972906 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:39.006683 2255187 cri.go:89] found id: ""
	I0911 12:07:39.006780 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:07:39.017143 2255187 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:07:39.017173 2255187 kubeadm.go:636] restartCluster start
	I0911 12:07:39.017256 2255187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:07:39.029483 2255187 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.031111 2255187 kubeconfig.go:92] found "embed-certs-235462" server: "https://192.168.50.96:8443"
	I0911 12:07:39.034708 2255187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:07:39.046851 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.046919 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.058732 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.058756 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.058816 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.070011 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.570811 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.570945 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.583538 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.071137 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.071264 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.083997 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.570532 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.570646 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.583202 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.070241 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.070369 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.082992 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.570284 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.570420 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.582669 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.070231 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.070341 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.086964 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.570487 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.570592 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.582618 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
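
	Each of the repeated "Checking apiserver status" attempts above is one iteration of a poll: run pgrep for the apiserver process, and retry roughly every half second until it appears or an overall deadline passes. A rough Go equivalent of that loop (a sketch, not minikube's code; the timeout in main is arbitrary):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)

	// waitForAPIServerPID keeps running pgrep for kube-apiserver until it
	// succeeds or the deadline passes.
	func waitForAPIServerPID(timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for {
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return strings.TrimSpace(string(out)), nil
			}
			if time.Now().After(deadline) {
				return "", fmt.Errorf("apiserver did not appear within %s", timeout)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		pid, err := waitForAPIServerPID(10 * time.Second)
		fmt.Println(pid, err)
	}
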
	I0911 12:07:39.411715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:39.412168 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:39.412203 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:39.412129 2256281 retry.go:31] will retry after 2.048771803s: waiting for machine to come up
	I0911 12:07:41.463672 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:41.464124 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:41.464160 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:41.464061 2256281 retry.go:31] will retry after 2.459765131s: waiting for machine to come up
	I0911 12:07:43.071070 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.071249 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.087309 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.570993 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.571105 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.586884 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.070402 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.070525 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.082541 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.571170 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.571303 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.583295 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.070902 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.071002 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.087666 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.570274 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.570400 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.587352 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.070596 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.070729 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.082939 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.570445 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.570559 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.582782 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.070351 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.070485 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.082518 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.571060 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.571155 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.583891 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.926561 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:43.926941 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:43.926983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:43.926918 2256281 retry.go:31] will retry after 2.467825155s: waiting for machine to come up
	I0911 12:07:46.396258 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:46.396703 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:46.396736 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:46.396622 2256281 retry.go:31] will retry after 3.885293775s: waiting for machine to come up
	I0911 12:07:48.070904 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.070994 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.083706 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:48.570268 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.570404 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.582255 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:49.047880 2255187 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:07:49.047929 2255187 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:07:49.047951 2255187 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:07:49.048052 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:49.081907 2255187 cri.go:89] found id: ""
	I0911 12:07:49.082024 2255187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:07:49.099563 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:07:49.109373 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:07:49.109450 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119162 2255187 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119210 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.251091 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.995928 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.192421 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.288496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
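
	The restart path above re-runs the individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated kubeadm.yaml instead of doing a full `kubeadm init`. A condensed Go sketch of that sequence (paths taken from the log; purely illustrative, and it omits the sudo/env PATH wrapper used in the real commands):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// runInitPhases executes each kubeadm init phase in order against one config.
	func runInitPhases(kubeadm, config string) error {
		phases := [][]string{
			{"init", "phase", "certs", "all"},
			{"init", "phase", "kubeconfig", "all"},
			{"init", "phase", "kubelet-start"},
			{"init", "phase", "control-plane", "all"},
			{"init", "phase", "etcd", "local"},
		}
		for _, p := range phases {
			args := append(p, "--config", config)
			if out, err := exec.Command(kubeadm, args...).CombinedOutput(); err != nil {
				return fmt.Errorf("%v failed: %v\n%s", p, err, out)
			}
		}
		return nil
	}

	func main() {
		fmt.Println(runInitPhases("/var/lib/minikube/binaries/v1.28.1/kubeadm", "/var/tmp/minikube/kubeadm.yaml"))
	}
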
	I0911 12:07:50.365849 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:07:50.365943 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.383262 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.901757 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.401967 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.901613 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:52.402067 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
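
The repeated `sudo pgrep -xnf kube-apiserver.*minikube.*` runs above are the wait for the apiserver process to reappear after the kubeadm phases, retried roughly every 500ms. A minimal sketch of the same polling idea, run locally rather than over SSH (the command and interval come from the log; the 90s deadline is an assumption):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `pgrep -xnf pattern` until it exits 0 (a matching
// process exists) or the context deadline expires.
func waitForProcess(ctx context.Context, pattern string, interval time.Duration) error {
	for {
		if err := exec.CommandContext(ctx, "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exited 0: process found
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("timed out waiting for %q: %w", pattern, ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 90*time.Second) // deadline assumed
	defer cancel()
	if err := waitForProcess(ctx, "kube-apiserver.*minikube.*", 500*time.Millisecond); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("apiserver process is up")
}
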
	I0911 12:07:50.285991 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:50.286515 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:50.286547 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:50.286433 2256281 retry.go:31] will retry after 3.948880306s: waiting for machine to come up
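
The `retry.go:31` lines interleaved above back off with growing delays while the VM waits for a DHCP lease. A small illustrative sketch of retrying with capped, jittered backoff (the base delay, cap and attempt count are assumptions, not minikube's values):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryWithBackoff keeps calling fn until it succeeds or attempts run out,
// sleeping an increasing, jittered delay between tries, much like the
// retry.go lines emitted while the domain has no IP address yet.
func retryWithBackoff(fn func() error, attempts int, base, maxDelay time.Duration) error {
	delay := base
	var lastErr error
	for i := 0; i < attempts; i++ {
		if lastErr = fn(); lastErr == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, lastErr)
		time.Sleep(delay + jitter)
		if delay *= 2; delay > maxDelay {
			delay = maxDelay
		}
	}
	return lastErr
}

func main() {
	tries := 0
	err := retryWithBackoff(func() error {
		if tries++; tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 10, 200*time.Millisecond, 5*time.Second)
	fmt.Println("result:", err)
}
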
	I0911 12:07:55.614569 2255814 start.go:369] acquired machines lock for "default-k8s-diff-port-484027" in 2m57.464444695s
	I0911 12:07:55.614642 2255814 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:55.614662 2255814 fix.go:54] fixHost starting: 
	I0911 12:07:55.615164 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:55.615208 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:55.635996 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0911 12:07:55.636556 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:55.637268 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:07:55.637295 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:55.637758 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:55.638000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:07:55.638191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:07:55.640059 2255814 fix.go:102] recreateIfNeeded on default-k8s-diff-port-484027: state=Stopped err=<nil>
	I0911 12:07:55.640086 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	W0911 12:07:55.640254 2255814 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:55.643100 2255814 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-484027" ...
	I0911 12:07:54.236661 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237200 2255304 main.go:141] libmachine: (old-k8s-version-642215) Found IP for machine: 192.168.61.58
	I0911 12:07:54.237226 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserving static IP address...
	I0911 12:07:54.237241 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has current primary IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237676 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.237717 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | skip adding static IP to network mk-old-k8s-version-642215 - found existing host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"}
	I0911 12:07:54.237736 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserved static IP address: 192.168.61.58
	I0911 12:07:54.237756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting for SSH to be available...
	I0911 12:07:54.237773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Getting to WaitForSSH function...
	I0911 12:07:54.240007 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240469 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.240521 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240610 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH client type: external
	I0911 12:07:54.240642 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa (-rw-------)
	I0911 12:07:54.240679 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:54.240700 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | About to run SSH command:
	I0911 12:07:54.240715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | exit 0
	I0911 12:07:54.337416 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:54.337857 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetConfigRaw
	I0911 12:07:54.338666 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.341640 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.341973 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.342025 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.342296 2255304 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/config.json ...
	I0911 12:07:54.342549 2255304 machine.go:88] provisioning docker machine ...
	I0911 12:07:54.342573 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:54.342809 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.342965 2255304 buildroot.go:166] provisioning hostname "old-k8s-version-642215"
	I0911 12:07:54.342986 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.343133 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.345466 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.345848 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.345881 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.346024 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.346214 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346491 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.346713 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.347165 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.347184 2255304 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642215 && echo "old-k8s-version-642215" | sudo tee /etc/hostname
	I0911 12:07:54.487005 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642215
	
	I0911 12:07:54.487058 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.489843 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490146 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.490175 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490378 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.490603 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490774 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490931 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.491146 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.491586 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.491612 2255304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642215/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:54.631441 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:54.631474 2255304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:54.631500 2255304 buildroot.go:174] setting up certificates
	I0911 12:07:54.631513 2255304 provision.go:83] configureAuth start
	I0911 12:07:54.631525 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.631988 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.634992 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635411 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.635448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635700 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.638219 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638608 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.638646 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638788 2255304 provision.go:138] copyHostCerts
	I0911 12:07:54.638870 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:54.638881 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:54.638957 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:54.639087 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:54.639099 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:54.639128 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:54.639278 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:54.639293 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:54.639322 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:54.639405 2255304 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642215 san=[192.168.61.58 192.168.61.58 localhost 127.0.0.1 minikube old-k8s-version-642215]
	I0911 12:07:54.792963 2255304 provision.go:172] copyRemoteCerts
	I0911 12:07:54.793027 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:54.793056 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.796196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796555 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.796592 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796884 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.797124 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.797410 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.797620 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:54.895690 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 12:07:54.923392 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:54.951276 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
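
provision.go:112 above generates a TLS server certificate whose SANs cover the VM IP, localhost, 127.0.0.1, minikube and the hostname, and the scp lines push it to /etc/docker on the guest. A rough sketch of producing such a certificate with Go's crypto/x509 (self-signed here for brevity, whereas the real one is signed by the ca.pem / ca-key.pem pair; the 2048-bit key size and one-year validity are assumptions):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	// SANs taken from the provision.go:112 line above.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-642215"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"localhost", "minikube", "old-k8s-version-642215"},
		IPAddresses:  []net.IP{net.ParseIP("192.168.61.58"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
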
	I0911 12:07:54.979345 2255304 provision.go:86] duration metric: configureAuth took 347.814948ms
	I0911 12:07:54.979383 2255304 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:54.979690 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:07:54.979805 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.982955 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983405 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.983448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983618 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.983822 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984020 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984190 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.984377 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.984924 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.984948 2255304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:55.330958 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:55.330995 2255304 machine.go:91] provisioned docker machine in 988.429681ms
	I0911 12:07:55.331008 2255304 start.go:300] post-start starting for "old-k8s-version-642215" (driver="kvm2")
	I0911 12:07:55.331021 2255304 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:55.331049 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.331490 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:55.331536 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.334936 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335425 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.335467 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335645 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.335902 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.336075 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.336290 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.439126 2255304 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:55.445330 2255304 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:55.445370 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:55.445453 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:55.445564 2255304 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:55.445692 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:55.455235 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:55.480979 2255304 start.go:303] post-start completed in 149.950869ms
	I0911 12:07:55.481014 2255304 fix.go:56] fixHost completed within 23.694753941s
	I0911 12:07:55.481046 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.484222 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484612 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.484647 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484879 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.485159 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485352 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485527 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.485696 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:55.486109 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:55.486122 2255304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:55.614312 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434075.554093051
	
	I0911 12:07:55.614344 2255304 fix.go:206] guest clock: 1694434075.554093051
	I0911 12:07:55.614355 2255304 fix.go:219] Guest: 2023-09-11 12:07:55.554093051 +0000 UTC Remote: 2023-09-11 12:07:55.481020512 +0000 UTC m=+302.412352865 (delta=73.072539ms)
	I0911 12:07:55.614409 2255304 fix.go:190] guest clock delta is within tolerance: 73.072539ms
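
fix.go reads the guest clock over SSH with date, compares it against the host-side timestamp for the same moment, and skips any resync when the delta is within tolerance. A worked version of that check using the exact timestamps from the log (the one-second tolerance shown here is an assumption):

package main

import (
	"fmt"
	"time"
)

// clockDeltaWithinTolerance reports whether the guest clock and the reference
// time differ by no more than tol, mirroring the "guest clock delta is within
// tolerance" decision above.
func clockDeltaWithinTolerance(guest, reference time.Time, tol time.Duration) (time.Duration, bool) {
	delta := guest.Sub(reference)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tol
}

func main() {
	guest := time.Unix(1694434075, 554093051)     // 1694434075.554093051 read from the guest
	reference := time.Unix(1694434075, 481020512) // 12:07:55.481020512 UTC on the host side
	delta, ok := clockDeltaWithinTolerance(guest, reference, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", delta, ok) // prints delta=73.072539ms within tolerance=true
}
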
	I0911 12:07:55.614423 2255304 start.go:83] releasing machines lock for "old-k8s-version-642215", held for 23.828210342s
	I0911 12:07:55.614465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.614816 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:55.617993 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618444 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.618489 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619611 2255304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:55.619674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.619732 2255304 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:55.619767 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.622428 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622846 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.622873 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622894 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623012 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623191 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623279 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.623302 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623399 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623543 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.623615 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623747 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623891 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.742462 2255304 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:55.748982 2255304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:55.906639 2255304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:55.914088 2255304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:55.914183 2255304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:55.938200 2255304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:55.938240 2255304 start.go:466] detecting cgroup driver to use...
	I0911 12:07:55.938333 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:55.965549 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:55.986227 2255304 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:55.986308 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:56.003370 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:56.025702 2255304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:56.158835 2255304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:56.311687 2255304 docker.go:212] disabling docker service ...
	I0911 12:07:56.311770 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:56.337492 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:56.355858 2255304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:56.486823 2255304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:56.617414 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:56.634057 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:56.658242 2255304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0911 12:07:56.658370 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.670146 2255304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:56.670252 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.681790 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.695832 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.707434 2255304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:56.718631 2255304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:56.729355 2255304 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:56.729436 2255304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:56.744591 2255304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:56.755374 2255304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:56.906693 2255304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:07:57.131296 2255304 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:57.131439 2255304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:57.137554 2255304 start.go:534] Will wait 60s for crictl version
	I0911 12:07:57.137645 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:07:57.141720 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:57.178003 2255304 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:57.178110 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.236871 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.303639 2255304 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0911 12:07:52.901170 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.401940 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.430776 2255187 api_server.go:72] duration metric: took 3.064926262s to wait for apiserver process to appear ...
	I0911 12:07:53.430809 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:07:53.430837 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431478 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.431528 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431982 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.932765 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.216903 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.216947 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.216964 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.322957 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.322994 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.432419 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.444961 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.445016 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:56.932209 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.942202 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.942242 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:57.432361 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:57.440671 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:07:57.453348 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:07:57.453393 2255187 api_server.go:131] duration metric: took 4.0225758s to wait for apiserver health ...
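
The 403 → 500 → 200 progression above is the usual apiserver startup sequence: anonymous requests are rejected until the RBAC bootstrap roles exist, /healthz then reports the still-failing poststarthooks, and finally everything turns ok. A rough sketch of such a polling loop (endpoint and interval taken from the log; the insecure TLS config is an assumption made only so the sketch is self-contained, since the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// pollHealthz keeps hitting the apiserver /healthz endpoint until it returns
// HTTP 200 ("ok") or the deadline passes, printing non-200 bodies as it goes.
func pollHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// For the sketch only; verify the cluster CA in real code.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d:\n%s\n", resp.StatusCode, body)
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	if err := pollHealthz("https://192.168.50.96:8443/healthz", 500*time.Millisecond, 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
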
	I0911 12:07:57.453408 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:57.453418 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:57.455939 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:07:57.457968 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:07:57.488156 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:07:57.524742 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:07:57.543532 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:07:57.543601 2255187 system_pods.go:61] "coredns-5dd5756b68-pkzcf" [4a44c7ec-bb5b-40f0-8d44-d5b77666cb95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:07:57.543616 2255187 system_pods.go:61] "etcd-embed-certs-235462" [c14f9910-0d1d-4494-9ebe-97173ab9abe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:07:57.543671 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4d95f49f-f9ad-40ce-9101-7e67ad978353] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:07:57.543686 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [753eea69-23f4-46f8-b631-36cf0f34d663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:07:57.543701 2255187 system_pods.go:61] "kube-proxy-v24dz" [e527b198-cf8f-4ada-af22-7979b249efd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:07:57.543711 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [b092d336-c45d-4b2c-87a5-df253a5fddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:07:57.543722 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-ldjwn" [4761a51f-8912-4be4-aa1d-2574e10da791] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:07:57.543735 2255187 system_pods.go:61] "storage-provisioner" [810336ff-14a1-4b3d-a4ff-2569f3710bab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:07:57.543744 2255187 system_pods.go:74] duration metric: took 18.975758ms to wait for pod list to return data ...
	I0911 12:07:57.543770 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:07:57.550468 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:07:57.550512 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:07:57.550527 2255187 node_conditions.go:105] duration metric: took 6.741621ms to run NodePressure ...
	I0911 12:07:57.550552 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:55.644857 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Start
	I0911 12:07:55.645094 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring networks are active...
	I0911 12:07:55.646010 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network default is active
	I0911 12:07:55.646393 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network mk-default-k8s-diff-port-484027 is active
	I0911 12:07:55.646808 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Getting domain xml...
	I0911 12:07:55.647513 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Creating domain...
	I0911 12:07:57.083879 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting to get IP...
	I0911 12:07:57.084769 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085290 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085361 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.085279 2256448 retry.go:31] will retry after 226.596764ms: waiting for machine to come up
	I0911 12:07:57.313593 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314083 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.314029 2256448 retry.go:31] will retry after 315.605673ms: waiting for machine to come up
	I0911 12:07:57.631774 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632292 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632329 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.632179 2256448 retry.go:31] will retry after 400.211275ms: waiting for machine to come up
	I0911 12:07:58.034189 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.305610 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:57.309276 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.309677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:57.309721 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.310066 2255304 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:57.316611 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:57.335580 2255304 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0911 12:07:57.335689 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:57.380592 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:07:57.380690 2255304 ssh_runner.go:195] Run: which lz4
	I0911 12:07:57.386023 2255304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:07:57.391807 2255304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:57.391861 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
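
Because `crictl images` shows none of the expected v1.16.0 images, the code checks for /preloaded.tar.lz4 on the guest and, since the stat fails, ships the cached preload tarball across. A tiny sketch of that check-then-copy pattern using local paths (file names are taken from the log; copying locally instead of over scp is the sketch's simplification):

package main

import (
	"fmt"
	"io"
	"os"
)

// ensureFile copies src to dst only when dst does not already exist,
// mirroring the existence check followed by the large transfer in the log.
func ensureFile(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, nothing to transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	n, err := io.Copy(out, in)
	fmt.Printf("copied %d bytes\n", n)
	return err
}

func main() {
	src := "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4"
	if err := ensureFile(src, "/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
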
	I0911 12:07:58.002314 2255187 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010948 2255187 kubeadm.go:787] kubelet initialised
	I0911 12:07:58.010981 2255187 kubeadm.go:788] duration metric: took 8.627903ms waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010993 2255187 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:07:58.020253 2255187 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.027844 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027876 2255187 pod_ready.go:81] duration metric: took 7.583678ms waiting for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.027888 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027900 2255187 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.050283 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050321 2255187 pod_ready.go:81] duration metric: took 22.413628ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.050352 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050369 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.060314 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060348 2255187 pod_ready.go:81] duration metric: took 9.962502ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.060360 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060371 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.069122 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069152 2255187 pod_ready.go:81] duration metric: took 8.771982ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.069164 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069176 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329758 2255187 pod_ready.go:92] pod "kube-proxy-v24dz" in "kube-system" namespace has status "Ready":"True"
	I0911 12:07:59.329789 2255187 pod_ready.go:81] duration metric: took 1.260592229s waiting for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329804 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:01.526483 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
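
In the entries above, pod_ready.go walks the system-critical pods and waits for each to report the Ready condition, skipping (and logging) pods whose node is itself not yet Ready. A hedged client-go sketch of the per-pod check follows; the helper name podIsReady is ours, and the real minikube code adds the node check and retry loop seen in the log.

// Rough client-go sketch of the per-pod readiness check behind pod_ready.go.
// Illustrative only; minikube's real implementation differs.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podIsReady(cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := podIsReady(cs, "kube-system", "kube-proxy-v24dz")
	fmt.Println(ready, err)
}
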
	I0911 12:07:58.034838 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.037141 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.034724 2256448 retry.go:31] will retry after 394.484585ms: waiting for machine to come up
	I0911 12:07:58.431365 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.431982 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.432004 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.431886 2256448 retry.go:31] will retry after 593.506569ms: waiting for machine to come up
	I0911 12:07:59.026841 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027490 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027518 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.027389 2256448 retry.go:31] will retry after 666.166785ms: waiting for machine to come up
	I0911 12:07:59.694652 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695161 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.695113 2256448 retry.go:31] will retry after 975.320046ms: waiting for machine to come up
	I0911 12:08:00.672258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672804 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672851 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:00.672755 2256448 retry.go:31] will retry after 1.161656415s: waiting for machine to come up
	I0911 12:08:01.835653 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836186 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836223 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:01.836130 2256448 retry.go:31] will retry after 1.505608393s: waiting for machine to come up
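
While that is happening, the kvm2 driver is still waiting for the default-k8s-diff-port VM to pick up a DHCP lease, retrying the lookup after progressively longer, jittered delays (394 ms, 593 ms, 666 ms, 975 ms, ...). A generic sketch of that retry-with-growing-delay pattern; the lookup function, delays, and timeout are stand-ins, not minikube's actual retry package.

// Generic retry-with-growing-jittered-delay sketch matching the
// "will retry after ..." pattern in the log above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitFor keeps calling lookup until it succeeds or the deadline passes,
// sleeping a little longer (with jitter) after each failure.
func waitFor(lookup func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay+jitter)
		time.Sleep(delay + jitter)
		delay += delay / 2 // grow the base delay ~1.5x per attempt
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	ip, err := waitFor(func() (string, error) {
		return "", errors.New("no DHCP lease yet") // stand-in for the real lease lookup
	}, 3*time.Second)
	fmt.Println(ip, err)
}
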
	I0911 12:07:59.503695 2255304 crio.go:444] Took 2.117718 seconds to copy over tarball
	I0911 12:07:59.503800 2255304 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:02.939001 2255304 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.435164165s)
	I0911 12:08:02.939037 2255304 crio.go:451] Took 3.435307 seconds to extract the tarball
	I0911 12:08:02.939050 2255304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:02.984446 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:03.037419 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:08:03.037452 2255304 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:03.037546 2255304 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.037582 2255304 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.037597 2255304 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.037628 2255304 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.037583 2255304 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.037607 2255304 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0911 12:08:03.037551 2255304 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.037549 2255304 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.039413 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.039639 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.039819 2255304 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.039854 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.040031 2255304 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.040241 2255304 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0911 12:08:03.815561 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:04.614171 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:04.614199 2255187 pod_ready.go:81] duration metric: took 5.28438743s waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:04.614211 2255187 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:06.638688 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:03.343936 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353931 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353970 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:03.344315 2256448 retry.go:31] will retry after 1.414606279s: waiting for machine to come up
	I0911 12:08:04.761183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761667 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:04.761607 2256448 retry.go:31] will retry after 1.846261641s: waiting for machine to come up
	I0911 12:08:06.609258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609917 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609965 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:06.609851 2256448 retry.go:31] will retry after 2.938814697s: waiting for machine to come up
	I0911 12:08:03.225129 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.227566 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.231565 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.233817 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.239841 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0911 12:08:03.243250 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.247155 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.522779 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.711354 2255304 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0911 12:08:03.711381 2255304 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0911 12:08:03.711438 2255304 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0911 12:08:03.711473 2255304 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.711501 2255304 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0911 12:08:03.711514 2255304 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0911 12:08:03.711530 2255304 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0911 12:08:03.711602 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711641 2255304 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0911 12:08:03.711678 2255304 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.711735 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711536 2255304 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.711823 2255304 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0911 12:08:03.711854 2255304 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.711856 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711894 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711475 2255304 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.711934 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711541 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711474 2255304 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.712005 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.823116 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0911 12:08:03.823136 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.823232 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.823349 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.823374 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.823429 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.823499 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.957383 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0911 12:08:03.957459 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0911 12:08:03.957513 2255304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.957521 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0911 12:08:03.957564 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0911 12:08:03.957649 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0911 12:08:03.957707 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0911 12:08:03.957743 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0911 12:08:03.962841 2255304 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0911 12:08:03.962863 2255304 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.962905 2255304 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0911 12:08:05.018464 2255304 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.055478429s)
	I0911 12:08:05.018510 2255304 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0911 12:08:05.018571 2255304 cache_images.go:92] LoadImages completed in 1.981102195s
	W0911 12:08:05.018661 2255304 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
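
Even after extracting the preload, `crictl images` did not report the v1.16.0 images, so LoadImages checks each required image with `podman image inspect`, removes stale references with `crictl rmi`, and loads whatever tarballs exist under the local image cache; here only pause_3.1 was cached, hence the warning about the missing kube-controller-manager tarball. A hedged sketch of the inspect-or-load step, shelling out to the same commands seen in the log (paths and the helper name are illustrative):

// Per-image "already present?" check followed by a tarball load, mirroring
// the podman commands above. Not minikube's cache_images implementation.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func ensureImage(ref, cachedTarball string) error {
	// podman image inspect exits non-zero when the image is not in the store.
	if err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", ref).Run(); err == nil {
		return nil
	}
	load := exec.Command("sudo", "podman", "load", "-i", cachedTarball)
	load.Stdout, load.Stderr = os.Stdout, os.Stderr
	return load.Run()
}

func main() {
	if err := ensureImage("registry.k8s.io/pause:3.1", "/var/lib/minikube/images/pause_3.1"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
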
	I0911 12:08:05.018747 2255304 ssh_runner.go:195] Run: crio config
	I0911 12:08:05.107550 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:05.107585 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:05.107614 2255304 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:05.107641 2255304 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642215 NodeName:old-k8s-version-642215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 12:08:05.107908 2255304 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-642215
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.58:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:05.108027 2255304 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642215 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
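
The kubeadm YAML and the kubelet systemd drop-in above are rendered from the cluster config (node name, node IP, Kubernetes version, CRI socket) and then scp'd into place as shown a few lines below. Here is an illustrative text/template sketch of rendering such a node-specific fragment; the struct and template are ours for illustration, not minikube's actual templates.

// Illustrative (not minikube's actual code) text/template rendering of a
// node-specific kubelet drop-in fragment from a small parameter struct.
package main

import (
	"os"
	"text/template"
)

type nodeParams struct {
	NodeName   string
	NodeIP     string
	K8sVersion string
	CRISocket  string
}

const kubeletUnit = `[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.K8sVersion}}/kubelet \
  --container-runtime-endpoint=unix://{{.CRISocket}} \
  --hostname-override={{.NodeName}} --node-ip={{.NodeIP}} \
  --kubeconfig=/etc/kubernetes/kubelet.conf
`

func main() {
	tmpl := template.Must(template.New("kubelet").Parse(kubeletUnit))
	p := nodeParams{
		NodeName:   "old-k8s-version-642215",
		NodeIP:     "192.168.61.58",
		K8sVersion: "v1.16.0",
		CRISocket:  "/var/run/crio/crio.sock",
	}
	if err := tmpl.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}
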
	I0911 12:08:05.108106 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0911 12:08:05.120210 2255304 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:05.120311 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:05.129517 2255304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0911 12:08:05.151855 2255304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:05.169543 2255304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0911 12:08:05.190304 2255304 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:05.196014 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:05.211627 2255304 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215 for IP: 192.168.61.58
	I0911 12:08:05.211663 2255304 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:05.211876 2255304 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:05.211943 2255304 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:05.212043 2255304 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.key
	I0911 12:08:05.212130 2255304 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key.7152e027
	I0911 12:08:05.212217 2255304 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key
	I0911 12:08:05.212397 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:05.212451 2255304 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:05.212467 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:05.212500 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:05.212531 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:05.212568 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:05.212637 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:05.213373 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:05.242362 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:05.272949 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:05.299359 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:05.326203 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:05.354388 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:05.385150 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:05.415683 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:05.449119 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:05.476397 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:05.503652 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:05.531520 2255304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:05.550108 2255304 ssh_runner.go:195] Run: openssl version
	I0911 12:08:05.556982 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:05.569083 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574490 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574570 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.581479 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:05.596824 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:05.607900 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613627 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613711 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.620309 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:05.630995 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:05.645786 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652682 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652773 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.660784 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:05.675417 2255304 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:05.681969 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:05.690345 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:05.697454 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:05.706283 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:05.712913 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:05.719308 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
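
Each of the control-plane certificates above is checked with `openssl x509 -checkend 86400`, i.e. "will this certificate expire within the next 86400 seconds (24 h)?". A Go equivalent using crypto/x509, should you want to reproduce the check without openssl; the function name is ours.

// Go equivalent of "openssl x509 -noout -in <cert> -checkend 86400":
// report whether a PEM-encoded certificate expires within the given window.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
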
	I0911 12:08:05.726307 2255304 kubeadm.go:404] StartCluster: {Name:old-k8s-version-642215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:05.726414 2255304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:05.726478 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:05.765092 2255304 cri.go:89] found id: ""
	I0911 12:08:05.765172 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:05.775654 2255304 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:05.775681 2255304 kubeadm.go:636] restartCluster start
	I0911 12:08:05.775749 2255304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:05.785235 2255304 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.786289 2255304 kubeconfig.go:92] found "old-k8s-version-642215" server: "https://192.168.61.58:8443"
	I0911 12:08:05.789768 2255304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:05.799009 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.799092 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.811208 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.811235 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.811301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.822223 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.322909 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.323053 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.337866 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.823220 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.823328 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.839573 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.323145 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.323245 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.335054 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.822427 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.822536 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.834385 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
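
The repeated "Checking apiserver status" entries above and below are restartCluster probing for a kube-apiserver process via `pgrep` over SSH roughly every 500 ms; every probe fails here because the static pods have not been restarted yet, so the loop keeps retrying until its own timeout. A rough local sketch of that probe loop (the timeout value is illustrative, and it runs pgrep directly rather than through ssh_runner):

// Poll for a running kube-apiserver process, mirroring the pgrep probe above.
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 only when a matching kube-apiserver process exists.
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for kube-apiserver process")
}

func main() {
	fmt.Println(waitForAPIServer(2 * time.Second))
}
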
	I0911 12:08:09.146768 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:11.637314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:09.552075 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552494 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:09.552442 2256448 retry.go:31] will retry after 3.623277093s: waiting for machine to come up
	I0911 12:08:08.323215 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.323301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.335501 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:08.822942 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.823061 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.840055 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.322586 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.322692 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.338101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.822702 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.822845 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.835245 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.322666 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.322750 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.337101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.822530 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.822662 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.838511 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.323206 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.323329 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.338239 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.822952 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.823044 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.838752 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.323296 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.323384 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.335174 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.822659 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.822775 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.834762 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.637784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:16.138584 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:13.178553 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179008 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179041 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:13.178961 2256448 retry.go:31] will retry after 3.636806595s: waiting for machine to come up
	I0911 12:08:16.818087 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818548 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has current primary IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Found IP for machine: 192.168.39.230
	I0911 12:08:16.818600 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserving static IP address...
	I0911 12:08:16.819118 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.819156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserved static IP address: 192.168.39.230
	I0911 12:08:16.819182 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | skip adding static IP to network mk-default-k8s-diff-port-484027 - found existing host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"}
	I0911 12:08:16.819204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Getting to WaitForSSH function...
	I0911 12:08:16.819221 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for SSH to be available...
	I0911 12:08:16.821746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822235 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.822270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822454 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH client type: external
	I0911 12:08:16.822500 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa (-rw-------)
	I0911 12:08:16.822551 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:16.822576 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | About to run SSH command:
	I0911 12:08:16.822590 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | exit 0
	I0911 12:08:16.957464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:16.957845 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetConfigRaw
	I0911 12:08:16.958573 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:16.961262 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.961726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.961762 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.962073 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:08:16.962281 2255814 machine.go:88] provisioning docker machine ...
	I0911 12:08:16.962301 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:16.962594 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962777 2255814 buildroot.go:166] provisioning hostname "default-k8s-diff-port-484027"
	I0911 12:08:16.962799 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962971 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:16.965571 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966095 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.966134 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966313 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:16.966531 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966685 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966837 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:16.967021 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:16.967739 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:16.967764 2255814 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-484027 && echo "default-k8s-diff-port-484027" | sudo tee /etc/hostname
	I0911 12:08:17.106967 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-484027
	
	I0911 12:08:17.107036 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.110243 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110663 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.110737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.111197 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111388 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.111782 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.112200 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.112223 2255814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-484027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-484027/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-484027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:17.238410 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:17.238450 2255814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:17.238508 2255814 buildroot.go:174] setting up certificates
	I0911 12:08:17.238520 2255814 provision.go:83] configureAuth start
	I0911 12:08:17.238536 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:17.238938 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:17.241635 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242044 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.242106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242209 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.244737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245093 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.245117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245295 2255814 provision.go:138] copyHostCerts
	I0911 12:08:17.245360 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:17.245375 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:17.245434 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:17.245530 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:17.245537 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:17.245557 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:17.245627 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:17.245634 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:17.245651 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:17.245708 2255814 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-484027 san=[192.168.39.230 192.168.39.230 localhost 127.0.0.1 minikube default-k8s-diff-port-484027]
	I0911 12:08:17.540142 2255814 provision.go:172] copyRemoteCerts
	I0911 12:08:17.540233 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:17.540270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.543823 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544237 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.544277 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544485 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.544706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.544916 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.545060 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:17.645425 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:17.675288 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0911 12:08:17.703043 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:17.732683 2255814 provision.go:86] duration metric: configureAuth took 494.12506ms
	I0911 12:08:17.732713 2255814 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:17.732955 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:17.733076 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.736740 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.737244 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.737707 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.737914 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.738084 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.738324 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.738749 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.738774 2255814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:13.323070 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.323174 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.334828 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.822403 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.822490 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.834374 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.323004 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.323100 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.334774 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.822351 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.822465 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.834368 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:15.323045 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:15.323154 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:15.334863 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
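Note: the repeated pgrep probes above are a plain poll-until-deadline loop; pgrep exits non-zero until a kube-apiserver process exists. A minimal sketch of that pattern (hypothetical helper, not minikube's actual code):

// waitForAPIServerPID polls pgrep roughly every half second until the
// kube-apiserver process appears or the context deadline passes.
// Hypothetical sketch only.
package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerPID(ctx context.Context) (string, error) {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return "", fmt.Errorf("apiserver error: %w", ctx.Err())
		case <-ticker.C:
			out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
			if err == nil {
				return string(out), nil // pgrep exits 0 once the process exists
			}
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	pid, err := waitForAPIServerPID(ctx)
	fmt.Println(pid, err)
}
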
	I0911 12:08:15.799700 2255304 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:15.799736 2255304 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:15.799751 2255304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:15.799821 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:15.831051 2255304 cri.go:89] found id: ""
	I0911 12:08:15.831140 2255304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:15.847072 2255304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:15.856353 2255304 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:15.856425 2255304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865711 2255304 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865740 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:15.990047 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.312314 2255304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.322225408s)
	I0911 12:08:17.312354 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.521733 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.627343 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.723857 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:17.723964 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:17.742688 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.336038 2255048 start.go:369] acquired machines lock for "no-preload-352076" in 1m2.388468349s
	I0911 12:08:18.336100 2255048 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:08:18.336125 2255048 fix.go:54] fixHost starting: 
	I0911 12:08:18.336615 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:18.336663 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:18.355715 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0911 12:08:18.356243 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:18.356901 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:08:18.356931 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:18.357385 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:18.357585 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:18.357787 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:08:18.359541 2255048 fix.go:102] recreateIfNeeded on no-preload-352076: state=Stopped err=<nil>
	I0911 12:08:18.359571 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	W0911 12:08:18.359750 2255048 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:08:18.361628 2255048 out.go:177] * Restarting existing kvm2 VM for "no-preload-352076" ...
	I0911 12:08:18.363286 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Start
	I0911 12:08:18.363532 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring networks are active...
	I0911 12:08:18.364515 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network default is active
	I0911 12:08:18.364894 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network mk-no-preload-352076 is active
	I0911 12:08:18.365345 2255048 main.go:141] libmachine: (no-preload-352076) Getting domain xml...
	I0911 12:08:18.366191 2255048 main.go:141] libmachine: (no-preload-352076) Creating domain...
	I0911 12:08:18.078952 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:18.078979 2255814 machine.go:91] provisioned docker machine in 1.116684764s
	I0911 12:08:18.078991 2255814 start.go:300] post-start starting for "default-k8s-diff-port-484027" (driver="kvm2")
	I0911 12:08:18.079011 2255814 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:18.079057 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.079482 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:18.079520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.082212 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082641 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.082674 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.083043 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.083227 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.083403 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.170810 2255814 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:18.175342 2255814 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:18.175370 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:18.175457 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:18.175583 2255814 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:18.175722 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:18.184543 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:18.209487 2255814 start.go:303] post-start completed in 130.475291ms
	I0911 12:08:18.209516 2255814 fix.go:56] fixHost completed within 22.594854569s
	I0911 12:08:18.209540 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.212339 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212779 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.212832 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212967 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.213187 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213366 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213515 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.213680 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:18.214071 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:18.214083 2255814 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:08:18.335862 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434098.277311369
	
	I0911 12:08:18.335893 2255814 fix.go:206] guest clock: 1694434098.277311369
	I0911 12:08:18.335902 2255814 fix.go:219] Guest: 2023-09-11 12:08:18.277311369 +0000 UTC Remote: 2023-09-11 12:08:18.20951981 +0000 UTC m=+200.212950109 (delta=67.791559ms)
	I0911 12:08:18.335925 2255814 fix.go:190] guest clock delta is within tolerance: 67.791559ms
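Note: the guest-clock check above runs date on the VM (presumably date +%s.%N, with the verbs again logged as %!s(MISSING)/%!N(MISSING)) and compares the result against the host's clock. A rough sketch of that comparison; the 2s tolerance here is an assumption for illustration, not the value minikube uses:

// clockDeltaOK parses the guest's `date +%s.%N` output and compares it to the
// host clock, reporting whether the delta is within tolerance. Sketch only.
package main

import (
	"fmt"
	"strconv"
	"time"
)

func clockDeltaOK(guestOut string, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		return 0, false
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	// Values taken from the log lines above.
	delta, ok := clockDeltaOK("1694434098.277311369", time.Unix(1694434098, 209519810), 2*time.Second)
	fmt.Println(delta, ok)
}
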
	I0911 12:08:18.335932 2255814 start.go:83] releasing machines lock for "default-k8s-diff-port-484027", held for 22.721324127s
	I0911 12:08:18.335977 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.336342 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:18.339935 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340372 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.340411 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340801 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341832 2255814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:18.341895 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.342153 2255814 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:18.342219 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.345331 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345619 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345716 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.345751 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346068 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346282 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.346367 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.346409 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346443 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.346624 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.346803 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346960 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.347119 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.347284 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.455877 2255814 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:18.463787 2255814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:18.620444 2255814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:18.628878 2255814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:18.628972 2255814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:18.652267 2255814 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:18.652301 2255814 start.go:466] detecting cgroup driver to use...
	I0911 12:08:18.652381 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:18.672306 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:18.690514 2255814 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:18.690594 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:18.709032 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:18.727521 2255814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:18.859864 2255814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:19.005708 2255814 docker.go:212] disabling docker service ...
	I0911 12:08:19.005809 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:19.026177 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:19.043931 2255814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:19.184060 2255814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:19.305184 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:19.326550 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:19.351313 2255814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:19.351400 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.366747 2255814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:19.366836 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.382272 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.395743 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.408786 2255814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:19.424229 2255814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:19.438367 2255814 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:19.438450 2255814 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:19.457417 2255814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:08:19.470001 2255814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:19.629977 2255814 ssh_runner.go:195] Run: sudo systemctl restart crio
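Note: the netfilter steps above are a fallback. sysctl net.bridge.bridge-nf-call-iptables fails with status 255 because the br_netfilter module is not loaded yet, so the module is loaded with modprobe and IPv4 forwarding is enabled before CRI-O is restarted. A condensed sketch of that logic (illustrative, not minikube's exact code):

// ensureNetfilter mirrors the fallback seen in the log: probe the sysctl key,
// load br_netfilter if the key is missing, then enable ip_forward.
// Illustrative sketch only.
package main

import (
	"log"
	"os/exec"
)

func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		log.Printf("%s %v failed: %v: %s", name, args, err, out)
	}
	return err
}

func ensureNetfilter() {
	if err := run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables"); err != nil {
		// The key only exists once the module is loaded; failure here is expected on first boot.
		_ = run("sudo", "modprobe", "br_netfilter")
	}
	_ = run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}

func main() { ensureNetfilter() }
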
	I0911 12:08:19.846900 2255814 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:19.846994 2255814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:19.854282 2255814 start.go:534] Will wait 60s for crictl version
	I0911 12:08:19.854378 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:08:19.859252 2255814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:19.897263 2255814 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:19.897349 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:19.966155 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:20.024697 2255814 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:08:18.639188 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.649395 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.026156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:20.029726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030249 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:20.030286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030572 2255814 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:20.035523 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:20.053903 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:20.053997 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:20.096570 2255814 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:20.096666 2255814 ssh_runner.go:195] Run: which lz4
	I0911 12:08:20.102350 2255814 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:08:20.107338 2255814 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:08:20.107385 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:08:22.215033 2255814 crio.go:444] Took 2.112735 seconds to copy over tarball
	I0911 12:08:22.215168 2255814 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:18.262191 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.762029 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.262094 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.316271 2255304 api_server.go:72] duration metric: took 1.592409696s to wait for apiserver process to appear ...
	I0911 12:08:19.316309 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:19.316329 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:19.892254 2255048 main.go:141] libmachine: (no-preload-352076) Waiting to get IP...
	I0911 12:08:19.893353 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:19.893857 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:19.893939 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:19.893867 2256639 retry.go:31] will retry after 256.490953ms: waiting for machine to come up
	I0911 12:08:20.152717 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.153686 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.153718 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.153662 2256639 retry.go:31] will retry after 308.528476ms: waiting for machine to come up
	I0911 12:08:20.464569 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.465179 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.465240 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.465150 2256639 retry.go:31] will retry after 329.79495ms: waiting for machine to come up
	I0911 12:08:20.797010 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.797581 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.797615 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.797512 2256639 retry.go:31] will retry after 388.108578ms: waiting for machine to come up
	I0911 12:08:21.187304 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.187980 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.188006 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.187878 2256639 retry.go:31] will retry after 547.488463ms: waiting for machine to come up
	I0911 12:08:21.736835 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.737425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.737466 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.737352 2256639 retry.go:31] will retry after 669.118316ms: waiting for machine to come up
	I0911 12:08:22.407727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:22.408435 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:22.408471 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:22.408353 2256639 retry.go:31] will retry after 986.70059ms: waiting for machine to come up
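Note: while the restarted no-preload VM boots, the driver polls libvirt for the domain's DHCP lease, sleeping a progressively longer, jittered interval between attempts (256ms, 308ms, 329ms, ... 986ms above). A minimal sketch of that retry shape (hypothetical helper, not minikube's retry.go):

// waitForIP retries a lookup with a growing, jittered backoff, the same shape
// as the "will retry after ..." lines in the log. Hypothetical sketch.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

func waitForIP(lookup func() (string, error), maxWait time.Duration) (string, error) {
	deadline := time.Now().Add(maxWait)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookup(); err == nil {
			return ip, nil
		}
		// Jittered, gently increasing sleep between probes.
		time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
		backoff = backoff * 3 / 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "192.168.50.10", nil }, time.Minute)
	fmt.Println(ip, err)
}
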
	I0911 12:08:23.139403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.141299 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:27.493149 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.680145 2255814 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.464917771s)
	I0911 12:08:25.680187 2255814 crio.go:451] Took 3.465097 seconds to extract the tarball
	I0911 12:08:25.680201 2255814 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:25.721940 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:25.770149 2255814 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:08:25.770189 2255814 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:08:25.770296 2255814 ssh_runner.go:195] Run: crio config
	I0911 12:08:25.844108 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:25.844142 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:25.844170 2255814 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:25.844197 2255814 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-484027 NodeName:default-k8s-diff-port-484027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:08:25.844471 2255814 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-484027"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:25.844584 2255814 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-484027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0911 12:08:25.844751 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:08:25.855558 2255814 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:25.855658 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:25.865531 2255814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0911 12:08:25.890631 2255814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:25.914304 2255814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0911 12:08:25.938065 2255814 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:25.943138 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:25.963689 2255814 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027 for IP: 192.168.39.230
	I0911 12:08:25.963744 2255814 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:25.963968 2255814 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:25.964026 2255814 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:25.964139 2255814 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.key
	I0911 12:08:25.964245 2255814 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key.165d62e4
	I0911 12:08:25.964309 2255814 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key
	I0911 12:08:25.964546 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:25.964599 2255814 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:25.964618 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:25.964655 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:25.964699 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:25.964731 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:25.964805 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:25.965758 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:26.001391 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:26.032345 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:26.065593 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:26.100792 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:26.135603 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:26.170029 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:26.203119 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:26.232040 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:26.262353 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:26.292733 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:26.326750 2255814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:26.346334 2255814 ssh_runner.go:195] Run: openssl version
	I0911 12:08:26.353175 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:26.365742 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372007 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372086 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.378954 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:26.390365 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:26.403147 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.410930 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.411048 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.419889 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:26.433366 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:26.445752 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452481 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452563 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.461097 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:26.477855 2255814 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:26.483947 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:26.492879 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:26.501391 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:26.510124 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:26.518732 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:26.527356 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
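Note on the OpenSSL steps above: openssl x509 -hash -noout -in <cert> prints the subject-name hash that OpenSSL uses to look up CA certificates, which is why each PEM gets a /etc/ssl/certs/<hash>.0 symlink; and -checkend 86400 exits non-zero if the certificate expires within the next 86400 seconds (24h). A small sketch that reproduces the symlink step (illustrative only):

// linkCAByHash recreates the `openssl x509 -hash` + `ln -fs` pair from the log:
// compute the subject hash and point /etc/ssl/certs/<hash>.0 at the cert.
// Illustrative sketch.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func linkCAByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs (force replace)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCAByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Println(err)
	}
}
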
	I0911 12:08:26.536063 2255814 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:26.536225 2255814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:26.536300 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:26.575522 2255814 cri.go:89] found id: ""
	I0911 12:08:26.575617 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:26.586011 2255814 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:26.586043 2255814 kubeadm.go:636] restartCluster start
	I0911 12:08:26.586114 2255814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:26.596758 2255814 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.598534 2255814 kubeconfig.go:92] found "default-k8s-diff-port-484027" server: "https://192.168.39.230:8444"
	I0911 12:08:26.603031 2255814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:26.617921 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.618066 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.632719 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.632739 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.632793 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.650036 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.150299 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.150397 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.165783 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.650311 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.650416 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.665184 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:24.317268 2255304 api_server.go:269] stopped: https://192.168.61.58:8443/healthz: Get "https://192.168.61.58:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0911 12:08:24.317328 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:26.742901 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:26.742942 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:27.243118 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.654196 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.654260 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:27.743438 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.767557 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.767607 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:28.243610 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:28.251858 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:28.262619 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:28.262659 2255304 api_server.go:131] duration metric: took 8.946341912s to wait for apiserver health ...
	I0911 12:08:28.262670 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:28.262676 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:28.264705 2255304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:23.396798 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:23.398997 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:23.399029 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:23.397251 2256639 retry.go:31] will retry after 1.384367074s: waiting for machine to come up
	I0911 12:08:24.783036 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:24.783547 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:24.783584 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:24.783489 2256639 retry.go:31] will retry after 1.172643107s: waiting for machine to come up
	I0911 12:08:25.958217 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:25.958989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:25.959024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:25.958929 2256639 retry.go:31] will retry after 2.243377044s: waiting for machine to come up
	I0911 12:08:28.205538 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:28.206196 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:28.206226 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:28.206137 2256639 retry.go:31] will retry after 1.83460511s: waiting for machine to come up
	I0911 12:08:28.266346 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:28.280404 2255304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:28.308228 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:28.317951 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:28.317994 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:28.318002 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:28.318010 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:28.318024 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Pending
	I0911 12:08:28.318030 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:28.318035 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:28.318039 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:28.318045 2255304 system_pods.go:74] duration metric: took 9.788007ms to wait for pod list to return data ...
	I0911 12:08:28.318055 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:28.323536 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:28.323578 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:28.323593 2255304 node_conditions.go:105] duration metric: took 5.532859ms to run NodePressure ...
	I0911 12:08:28.323619 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:28.927871 2255304 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938224 2255304 kubeadm.go:787] kubelet initialised
	I0911 12:08:28.938256 2255304 kubeadm.go:788] duration metric: took 10.348938ms waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938267 2255304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:28.944405 2255304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.951735 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951774 2255304 pod_ready.go:81] duration metric: took 7.334386ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.951786 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951800 2255304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.964451 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964487 2255304 pod_ready.go:81] duration metric: took 12.678175ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.964499 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964510 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.971472 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971503 2255304 pod_ready.go:81] duration metric: took 6.983445ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.971514 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971523 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.978657 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978691 2255304 pod_ready.go:81] duration metric: took 7.156987ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.978704 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978728 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.334593 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334652 2255304 pod_ready.go:81] duration metric: took 355.905465ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.334670 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334683 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.734221 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734262 2255304 pod_ready.go:81] duration metric: took 399.567918ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.734275 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734287 2255304 pod_ready.go:38] duration metric: took 796.006553ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:29.734313 2255304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:29.749280 2255304 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:29.749313 2255304 kubeadm.go:640] restartCluster took 23.973623788s
	I0911 12:08:29.749325 2255304 kubeadm.go:406] StartCluster complete in 24.023033441s
	I0911 12:08:29.749349 2255304 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.749453 2255304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:29.752216 2255304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.752582 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:29.752784 2255304 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:29.752912 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:08:29.752947 2255304 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-642215"
	I0911 12:08:29.752971 2255304 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-642215"
	I0911 12:08:29.752976 2255304 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753016 2255304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-642215"
	W0911 12:08:29.752979 2255304 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:29.753159 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.752984 2255304 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753232 2255304 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-642215"
	W0911 12:08:29.753281 2255304 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:29.753369 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.753517 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753554 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753599 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753630 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753954 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.754016 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.773524 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:08:29.773614 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0911 12:08:29.774181 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774418 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774950 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.774967 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775141 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0911 12:08:29.775158 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.775176 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775584 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775585 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775597 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.775756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.776112 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776144 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.776178 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.776197 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.776510 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.776970 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776990 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.790443 2255304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-642215" context rescaled to 1 replicas
	I0911 12:08:29.790502 2255304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:29.793918 2255304 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:29.796131 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:29.798116 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0911 12:08:29.798581 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.799554 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.799580 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.800105 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.800439 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.802764 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.805061 2255304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:29.803246 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0911 12:08:29.807001 2255304 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:29.807025 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:29.807053 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.807866 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.807924 2255304 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-642215"
	W0911 12:08:29.807949 2255304 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:29.807985 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.808406 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.808442 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.809644 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.809667 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.817010 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.817046 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.817101 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817131 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.817158 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817555 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.817625 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.817868 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.818244 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.820240 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.822846 2255304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:29.824505 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:29.824526 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:29.824554 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.827924 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828359 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.828396 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828684 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.828950 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.829099 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.829285 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.830900 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0911 12:08:29.831463 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.832028 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.832049 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.832646 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.833261 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.833313 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.868600 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0911 12:08:29.869171 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.869822 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.869842 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.870236 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.870416 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.872928 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.873212 2255304 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:29.873232 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:29.873255 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.876313 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.876963 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.876983 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.876999 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.877168 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.877331 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.877468 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:30.019745 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:30.061364 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:30.061393 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:30.080562 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:30.100494 2255304 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:30.100511 2255304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:30.120618 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:30.120647 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:30.173391 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.173427 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:30.208772 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.757802 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.757841 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.757982 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758021 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758294 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758334 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758344 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758353 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758377 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758620 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758646 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758660 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758677 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758690 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758701 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758717 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758743 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758943 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758954 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.759016 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.759052 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.759062 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859384 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859426 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.859828 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.859853 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859864 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859874 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.860302 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.860336 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.860357 2255304 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-642215"
	I0911 12:08:30.862687 2255304 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:08:29.637791 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:31.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:28.150174 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.150294 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.166331 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:28.650905 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.650996 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.664146 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.150646 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.150745 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.166569 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.651031 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.651129 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.664106 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.150429 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.150535 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.167297 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.650364 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.650458 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.664180 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.150419 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.150521 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.168242 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.650834 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.650922 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.664772 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.150232 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.150362 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.163224 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.650676 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.650773 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.667077 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.864433 2255304 addons.go:502] enable addons completed in 1.111642638s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:08:32.139191 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:30.042388 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:30.043026 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:30.043054 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:30.042967 2256639 retry.go:31] will retry after 3.282840664s: waiting for machine to come up
	I0911 12:08:33.327456 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:33.328007 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:33.328066 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:33.327941 2256639 retry.go:31] will retry after 4.185053881s: waiting for machine to come up
	I0911 12:08:33.639996 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:36.139377 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:33.150668 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.150797 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.163178 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:33.650733 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.650845 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.666475 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.150939 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.151037 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.163985 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.650139 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.650250 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.664850 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.150224 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.150351 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.169894 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.650946 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.651044 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.665438 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.151019 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:36.151134 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:36.164843 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.618412 2255814 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:36.618460 2255814 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:36.618482 2255814 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:36.618571 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:36.657264 2255814 cri.go:89] found id: ""
	I0911 12:08:36.657366 2255814 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:36.676222 2255814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:36.686832 2255814 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:36.686923 2255814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699618 2255814 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699654 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:36.842821 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.471899 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.699214 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.784721 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.870994 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:37.871085 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:37.894561 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:34.638777 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.138575 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.515376 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:37.515955 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:37.515989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:37.515896 2256639 retry.go:31] will retry after 3.472863196s: waiting for machine to come up
	I0911 12:08:38.138433 2255304 node_ready.go:49] node "old-k8s-version-642215" has status "Ready":"True"
	I0911 12:08:38.138464 2255304 node_ready.go:38] duration metric: took 8.037923512s waiting for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:38.138475 2255304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:38.143177 2255304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664411 2255304 pod_ready.go:92] pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.664449 2255304 pod_ready.go:81] duration metric: took 521.244524ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664463 2255304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670838 2255304 pod_ready.go:92] pod "etcd-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.670876 2255304 pod_ready.go:81] duration metric: took 6.404356ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670890 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679254 2255304 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.679284 2255304 pod_ready.go:81] duration metric: took 8.385069ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679299 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939484 2255304 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.939514 2255304 pod_ready.go:81] duration metric: took 260.206232ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939529 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337858 2255304 pod_ready.go:92] pod "kube-proxy-855lt" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.337894 2255304 pod_ready.go:81] duration metric: took 398.358394ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337907 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738437 2255304 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.738465 2255304 pod_ready.go:81] duration metric: took 400.549428ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738479 2255304 pod_ready.go:38] duration metric: took 1.599991385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:39.738509 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:39.738569 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.760727 2255304 api_server.go:72] duration metric: took 9.970181642s to wait for apiserver process to appear ...
	I0911 12:08:39.760774 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:39.760797 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:39.768195 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:39.769416 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:39.769442 2255304 api_server.go:131] duration metric: took 8.658497ms to wait for apiserver health ...
	I0911 12:08:39.769457 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:39.940647 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:39.940683 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:39.940701 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:39.940708 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:39.940715 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:39.940722 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:39.940729 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:39.940736 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:39.940747 2255304 system_pods.go:74] duration metric: took 171.283587ms to wait for pod list to return data ...
	I0911 12:08:39.940763 2255304 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:08:40.139718 2255304 default_sa.go:45] found service account: "default"
	I0911 12:08:40.139751 2255304 default_sa.go:55] duration metric: took 198.981243ms for default service account to be created ...
	I0911 12:08:40.139763 2255304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:08:40.340959 2255304 system_pods.go:86] 7 kube-system pods found
	I0911 12:08:40.340998 2255304 system_pods.go:89] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:40.341008 2255304 system_pods.go:89] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:40.341015 2255304 system_pods.go:89] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:40.341028 2255304 system_pods.go:89] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:40.341035 2255304 system_pods.go:89] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:40.341042 2255304 system_pods.go:89] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:40.341051 2255304 system_pods.go:89] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:40.341061 2255304 system_pods.go:126] duration metric: took 201.290886ms to wait for k8s-apps to be running ...
	I0911 12:08:40.341073 2255304 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:08:40.341163 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:40.359994 2255304 system_svc.go:56] duration metric: took 18.903474ms WaitForService to wait for kubelet.
	I0911 12:08:40.360036 2255304 kubeadm.go:581] duration metric: took 10.569498287s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:08:40.360063 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:40.538713 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:40.538748 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:40.538762 2255304 node_conditions.go:105] duration metric: took 178.692637ms to run NodePressure ...
	I0911 12:08:40.538778 2255304 start.go:228] waiting for startup goroutines ...
	I0911 12:08:40.538785 2255304 start.go:233] waiting for cluster config update ...
	I0911 12:08:40.538798 2255304 start.go:242] writing updated cluster config ...
	I0911 12:08:40.539175 2255304 ssh_runner.go:195] Run: rm -f paused
	I0911 12:08:40.601745 2255304 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0911 12:08:40.604230 2255304 out.go:177] 
	W0911 12:08:40.606184 2255304 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0911 12:08:40.607933 2255304 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0911 12:08:40.609870 2255304 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-642215" cluster and "default" namespace by default
	I0911 12:08:38.638441 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:40.639280 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:38.411419 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:38.910721 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.410710 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.911432 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.411115 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.438764 2255814 api_server.go:72] duration metric: took 2.567766062s to wait for apiserver process to appear ...
	I0911 12:08:40.438803 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:40.438828 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.439582 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.439644 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.440098 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.940202 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
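While the apiserver comes back up, each healthz probe fails with connection refused and is retried on a short interval until the endpoint answers. A simplified sketch of such a poll loop, assuming a self-signed apiserver certificate (the interval and timeout values are illustrative, not minikube's):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
// TLS verification is skipped because the apiserver uses a self-signed CA here.
func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	err := waitForHealthz("https://192.168.39.230:8444/healthz", 500*time.Millisecond, time.Minute)
	fmt.Println(err)
}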
	I0911 12:08:40.989968 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990485 2255048 main.go:141] libmachine: (no-preload-352076) Found IP for machine: 192.168.72.157
	I0911 12:08:40.990519 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has current primary IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990530 2255048 main.go:141] libmachine: (no-preload-352076) Reserving static IP address...
	I0911 12:08:40.990910 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.990942 2255048 main.go:141] libmachine: (no-preload-352076) Reserved static IP address: 192.168.72.157
	I0911 12:08:40.991004 2255048 main.go:141] libmachine: (no-preload-352076) Waiting for SSH to be available...
	I0911 12:08:40.991024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | skip adding static IP to network mk-no-preload-352076 - found existing host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"}
	I0911 12:08:40.991044 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Getting to WaitForSSH function...
	I0911 12:08:40.994061 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994417 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.994478 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994612 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH client type: external
	I0911 12:08:40.994653 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa (-rw-------)
	I0911 12:08:40.994693 2255048 main.go:141] libmachine: (no-preload-352076) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:40.994711 2255048 main.go:141] libmachine: (no-preload-352076) DBG | About to run SSH command:
	I0911 12:08:40.994725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | exit 0
	I0911 12:08:41.093865 2255048 main.go:141] libmachine: (no-preload-352076) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:41.094360 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetConfigRaw
	I0911 12:08:41.095142 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.098534 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.098944 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.098985 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.099319 2255048 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/config.json ...
	I0911 12:08:41.099667 2255048 machine.go:88] provisioning docker machine ...
	I0911 12:08:41.099711 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:41.100079 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100503 2255048 buildroot.go:166] provisioning hostname "no-preload-352076"
	I0911 12:08:41.100535 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100868 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.104253 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.104920 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.105102 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.105420 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.105864 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106201 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106627 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.106937 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.107432 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.107447 2255048 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-352076 && echo "no-preload-352076" | sudo tee /etc/hostname
	I0911 12:08:41.249885 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-352076
	
	I0911 12:08:41.249919 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.253419 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.253854 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.253892 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.254125 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.254373 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254576 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254752 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.254945 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.255592 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.255624 2255048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-352076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-352076/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-352076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:41.394308 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
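The hosts-file script above is idempotent: it leaves /etc/hosts alone when the hostname is already mapped, rewrites an existing 127.0.1.1 entry, or appends a new one. The same idea expressed as a small Go helper, purely for illustration (minikube does this with the shell script shown in the log):

package main

import (
	"fmt"
	"strings"
)

// ensureHostname returns hosts content that maps 127.0.1.1 to name,
// leaving the content unchanged if the name already appears on a line.
func ensureHostname(hosts, name string) string {
	lines := strings.Split(hosts, "\n")
	for _, l := range lines {
		if strings.HasSuffix(strings.TrimSpace(l), " "+name) {
			return hosts // already mapped somewhere, nothing to do
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + name // rewrite the existing loopback alias
			return strings.Join(lines, "\n")
		}
	}
	return hosts + "\n127.0.1.1 " + name + "\n" // append a fresh entry
}

func main() {
	fmt.Print(ensureHostname("127.0.0.1 localhost", "no-preload-352076"))
}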
	I0911 12:08:41.394348 2255048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:41.394378 2255048 buildroot.go:174] setting up certificates
	I0911 12:08:41.394388 2255048 provision.go:83] configureAuth start
	I0911 12:08:41.394401 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.394737 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.398042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398506 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.398540 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398747 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.401368 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401743 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.401797 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401939 2255048 provision.go:138] copyHostCerts
	I0911 12:08:41.402020 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:41.402034 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:41.402102 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:41.402226 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:41.402238 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:41.402278 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:41.402374 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:41.402386 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:41.402413 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:41.402501 2255048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.no-preload-352076 san=[192.168.72.157 192.168.72.157 localhost 127.0.0.1 minikube no-preload-352076]
	I0911 12:08:41.717751 2255048 provision.go:172] copyRemoteCerts
	I0911 12:08:41.717828 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:41.717865 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.721152 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721457 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.721499 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721720 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.721943 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.722111 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.722261 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:41.818932 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:41.846852 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:41.875977 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 12:08:41.905364 2255048 provision.go:86] duration metric: configureAuth took 510.946609ms
	I0911 12:08:41.905401 2255048 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:41.905662 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:41.905762 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.909182 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909656 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.909725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909913 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.910149 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910342 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910487 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.910649 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.911134 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.911154 2255048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:42.260214 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:42.260254 2255048 machine.go:91] provisioned docker machine in 1.16057097s
	I0911 12:08:42.260268 2255048 start.go:300] post-start starting for "no-preload-352076" (driver="kvm2")
	I0911 12:08:42.260283 2255048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:42.260307 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.260700 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:42.260738 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.263782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264157 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.264197 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264358 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.264595 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.264808 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.265010 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.356470 2255048 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:42.361886 2255048 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:42.361921 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:42.362004 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:42.362082 2255048 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:42.362196 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:42.372005 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:42.400800 2255048 start.go:303] post-start completed in 140.51468ms
	I0911 12:08:42.400850 2255048 fix.go:56] fixHost completed within 24.064734762s
	I0911 12:08:42.400882 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.404351 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.404799 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.404862 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.405055 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.405297 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405484 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405644 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.405859 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:42.406477 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:42.406505 2255048 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:08:42.529978 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434122.467205529
	
	I0911 12:08:42.530008 2255048 fix.go:206] guest clock: 1694434122.467205529
	I0911 12:08:42.530020 2255048 fix.go:219] Guest: 2023-09-11 12:08:42.467205529 +0000 UTC Remote: 2023-09-11 12:08:42.400855668 +0000 UTC m=+369.043734805 (delta=66.349861ms)
	I0911 12:08:42.530049 2255048 fix.go:190] guest clock delta is within tolerance: 66.349861ms
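fix.go parses the guest's "date +%s.%N" output and only resyncs the clock when the delta against the host exceeds a tolerance. A simplified sketch of that comparison; the parsing helper and the 5-second threshold are illustrative assumptions, not the values minikube uses:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch turns "1694434122.467205529" (seconds.nanoseconds) into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := (parts[1] + "000000000")[:9] // pad/truncate the fraction to nanoseconds
		nsec, err = strconv.ParseInt(frac, 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := parseEpoch("1694434122.467205529")
	delta := time.Since(guest)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 5 * time.Second // illustrative threshold
	fmt.Printf("guest clock delta %v (within tolerance: %v)\n", delta, delta <= tolerance)
}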
	I0911 12:08:42.530062 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 24.19398788s
	I0911 12:08:42.530094 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.530397 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:42.533425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.533777 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.533809 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.534032 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534670 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534881 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534986 2255048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:42.535048 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.535193 2255048 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:42.535235 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.538009 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538210 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538356 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538386 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538551 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538630 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538658 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538748 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.538862 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538939 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539033 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.539212 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539226 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.539373 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.659315 2255048 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:42.666117 2255048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:42.827592 2255048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:42.834283 2255048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:42.834379 2255048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:42.855077 2255048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:42.855107 2255048 start.go:466] detecting cgroup driver to use...
	I0911 12:08:42.855187 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:42.871553 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:42.886253 2255048 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:42.886341 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:42.902211 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:42.917991 2255048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:43.043679 2255048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:43.182633 2255048 docker.go:212] disabling docker service ...
	I0911 12:08:43.182709 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:43.200269 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:43.216232 2255048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:43.338376 2255048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:43.460730 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:43.478083 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:43.499948 2255048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:43.500018 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.513007 2255048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:43.513098 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.526435 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.539976 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
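The sed edits above pin the pause image, switch cri-o to the cgroupfs manager, and re-add conmon_cgroup = "pod" right after it. The same text rewrite sketched with Go regexps that mirror those sed expressions; the sample input is made up and merely stands in for /etc/crio/crio.conf.d/02-crio.conf:

package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Made-up starting content standing in for 02-crio.conf.
	conf := "pause_image = \"registry.k8s.io/pause:3.6\"\ncgroup_manager = \"systemd\"\nconmon_cgroup = \"system.slice\"\n"

	// Pin the pause image (first sed above).
	conf = regexp.MustCompile(`(?m)^.*pause_image = .*$`).ReplaceAllString(conf, `pause_image = "registry.k8s.io/pause:3.9"`)
	// Switch the cgroup manager to cgroupfs (second sed).
	conf = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).ReplaceAllString(conf, `cgroup_manager = "cgroupfs"`)
	// Drop any existing conmon_cgroup line, then re-add it after cgroup_manager (third and fourth sed).
	conf = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAllString(conf, "")
	conf = regexp.MustCompile(`(?m)^(cgroup_manager = .*)$`).ReplaceAllString(conf, "$1\nconmon_cgroup = \"pod\"")

	fmt.Print(conf)
}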
	I0911 12:08:43.553967 2255048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:43.568765 2255048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:43.580392 2255048 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:43.580481 2255048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:43.599784 2255048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
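When the bridge-nf-call-iptables sysctl is missing, the fallback is to load br_netfilter and then enable IPv4 forwarding, exactly the sequence logged above. A compact try-then-fallback sketch using os/exec, run locally here rather than over SSH as minikube does:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Probe the netfilter sysctl first; if the key is missing, load the module.
	if err := exec.Command("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
		fmt.Println("sysctl probe failed, loading br_netfilter:", err)
		if err := exec.Command("sudo", "modprobe", "br_netfilter").Run(); err != nil {
			fmt.Println("modprobe failed:", err)
			return
		}
	}
	// Enable IPv4 forwarding regardless of which path was taken.
	if err := exec.Command("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward").Run(); err != nil {
		fmt.Println("enabling ip_forward failed:", err)
	}
}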
	I0911 12:08:43.612160 2255048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:43.725608 2255048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:43.930261 2255048 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:43.930390 2255048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:43.937749 2255048 start.go:534] Will wait 60s for crictl version
	I0911 12:08:43.937875 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:43.942818 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:43.986093 2255048 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:43.986210 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.042887 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.106673 2255048 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
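The crictl version output a few lines up is plain "Key:  Value" text. A tiny parser sketch for that shape of output (field names are copied from the log; the map-based helper is only an illustration):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// parseCrictlVersion reads "Key:  Value" lines like the output logged above.
func parseCrictlVersion(out string) map[string]string {
	fields := map[string]string{}
	sc := bufio.NewScanner(strings.NewReader(out))
	for sc.Scan() {
		parts := strings.SplitN(sc.Text(), ":", 2)
		if len(parts) == 2 {
			fields[strings.TrimSpace(parts[0])] = strings.TrimSpace(parts[1])
		}
	}
	return fields
}

func main() {
	out := "Version:  0.1.0\nRuntimeName:  cri-o\nRuntimeVersion:  1.24.1\nRuntimeApiVersion:  v1alpha2\n"
	v := parseCrictlVersion(out)
	fmt.Println(v["RuntimeName"], v["RuntimeVersion"]) // cri-o 1.24.1
}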
	I0911 12:08:45.592797 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.592855 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.592874 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.637810 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.637846 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.940997 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.947826 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:45.947867 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.440462 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.449732 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:46.449772 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.940777 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.946988 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:08:46.957787 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:08:46.957832 2255814 api_server.go:131] duration metric: took 6.519019358s to wait for apiserver health ...
	I0911 12:08:46.957845 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:46.957854 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:46.960358 2255814 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:43.138628 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:45.640990 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:46.962120 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:46.987804 2255814 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:47.021845 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:47.042508 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:08:47.042560 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:08:47.042575 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:08:47.042585 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:08:47.042600 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:08:47.042612 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:08:47.042624 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:08:47.042641 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:08:47.042652 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:08:47.042663 2255814 system_pods.go:74] duration metric: took 20.787272ms to wait for pod list to return data ...
	I0911 12:08:47.042677 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:47.048412 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:47.048524 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:47.048547 2255814 node_conditions.go:105] duration metric: took 5.861231ms to run NodePressure ...
	I0911 12:08:47.048595 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:47.550933 2255814 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556511 2255814 kubeadm.go:787] kubelet initialised
	I0911 12:08:47.556543 2255814 kubeadm.go:788] duration metric: took 5.579487ms waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556554 2255814 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:47.563694 2255814 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.569943 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.569975 2255814 pod_ready.go:81] duration metric: took 6.244443ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.569986 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.570001 2255814 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.576703 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576777 2255814 pod_ready.go:81] duration metric: took 6.7656ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.576791 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576805 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.587740 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587788 2255814 pod_ready.go:81] duration metric: took 10.95451ms waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.587813 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587833 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.596430 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596468 2255814 pod_ready.go:81] duration metric: took 8.617854ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.596481 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596492 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.956009 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956047 2255814 pod_ready.go:81] duration metric: took 359.546333ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.956060 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956078 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
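The pod_ready loop skips each system pod while its node still reports Ready=False and keeps polling until the pod's own Ready condition turns true. A bare-bones client-go sketch of that Ready check; the kubeconfig path, poll interval, and pod name are illustrative assumptions:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-scheduler-default-k8s-diff-port-484027", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
}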
	I0911 12:08:44.108577 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:44.112208 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.112736 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:44.112782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.113074 2255048 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:44.119517 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:44.140305 2255048 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:44.140398 2255048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:44.184487 2255048 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:44.184529 2255048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:44.184600 2255048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.184910 2255048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.185117 2255048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.185240 2255048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.185366 2255048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.185790 2255048 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.185987 2255048 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0911 12:08:44.186471 2255048 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.186856 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.186943 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.187105 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.187306 2255048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.187523 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.187570 2255048 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0911 12:08:44.188031 2255048 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.188698 2255048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
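With no local Docker daemon available, every daemon lookup fails and the code falls back to inspecting each image on the remote host with podman, as the following lines show. A small sketch of such a presence check; it shells out locally for brevity, whereas minikube runs the command over SSH on the VM:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// remoteImageID asks podman for an image ID; an error means the image is
// absent and would have to be loaded from the cached tarball instead.
func remoteImageID(image string) (string, error) {
	out, err := exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Output()
	if err != nil {
		return "", fmt.Errorf("image %s not present: %w", image, err)
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	if id, err := remoteImageID("registry.k8s.io/pause:3.9"); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("already present:", id)
	}
}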
	I0911 12:08:44.350727 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.351429 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.353625 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.356576 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.374129 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0911 12:08:44.385524 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.410764 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.472311 2255048 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0911 12:08:44.472382 2255048 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.472453 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.572121 2255048 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0911 12:08:44.572186 2255048 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.572258 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589426 2255048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0911 12:08:44.589558 2255048 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.589492 2255048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0911 12:08:44.589638 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589643 2255048 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.589692 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691568 2255048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0911 12:08:44.691627 2255048 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.691657 2255048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0911 12:08:44.691734 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.691767 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.691749 2255048 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.691816 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691705 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691943 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.691955 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.729362 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.778025 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0911 12:08:44.778152 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0911 12:08:44.778215 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:44.778280 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.799788 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0911 12:08:44.799952 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:08:44.799997 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.800112 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.800183 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0911 12:08:44.800283 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:44.851138 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0911 12:08:44.851174 2255048 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851192 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0911 12:08:44.851227 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0911 12:08:44.851239 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851141 2255048 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0911 12:08:44.851363 2255048 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.851430 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.896214 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0911 12:08:44.896261 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0911 12:08:44.896310 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0911 12:08:44.896376 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:44.896377 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:08:46.231671 2255048 ssh_runner.go:235] Completed: which crictl: (1.380174214s)
	I0911 12:08:46.231732 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (1.33531707s)
	I0911 12:08:46.231734 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.38044194s)
	I0911 12:08:46.231760 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0911 12:08:46.231767 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0911 12:08:46.231780 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:46.231781 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231821 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231777 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1: (1.335378451s)
	I0911 12:08:46.231904 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
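The cache_images sequence above follows a fixed pattern: ask the runtime for the image ID with podman, drop the stale tag with crictl when that ID does not match the cached image, then load the tarball that was copied into /var/lib/minikube/images. A minimal manual sketch of the same steps (image tag and tarball path taken from this log; running these by hand is not part of the test):

    # 1. Check which ID (if any) the runtime currently has for the tag
    sudo podman image inspect --format '{{.Id}}' registry.k8s.io/coredns/coredns:v1.10.1
    # 2. If the ID differs from the cached image, drop the tag ...
    sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
    # 3. ... and load the pre-cached tarball that minikube copied to the VM
    sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1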
	I0911 12:08:48.356501 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356547 2255814 pod_ready.go:81] duration metric: took 400.453753ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.356563 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356575 2255814 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:48.756718 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756761 2255814 pod_ready.go:81] duration metric: took 400.17438ms waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.756775 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756786 2255814 pod_ready.go:38] duration metric: took 1.200219314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:48.756806 2255814 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:48.775561 2255814 ops.go:34] apiserver oom_adj: -16
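The oom_adj value read above (-16) is the legacy per-process OOM knob; newer kernels expose the equivalent setting as oom_score_adj, and a strongly negative value keeps the apiserver from being picked by the OOM killer. A hedged manual check (the pgrep pattern mirrors the one used elsewhere in this log):

    pid=$(pgrep -xnf 'kube-apiserver.*minikube.*')
    cat "/proc/${pid}/oom_adj" "/proc/${pid}/oom_score_adj"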
	I0911 12:08:48.775587 2255814 kubeadm.go:640] restartCluster took 22.189536767s
	I0911 12:08:48.775598 2255814 kubeadm.go:406] StartCluster complete in 22.23955062s
	I0911 12:08:48.775621 2255814 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.775713 2255814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:48.778091 2255814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.778397 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:48.778424 2255814 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:48.778566 2255814 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778597 2255814 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.778614 2255814 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:48.778648 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:48.778696 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.778718 2255814 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778734 2255814 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-484027"
	I0911 12:08:48.779141 2255814 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.779145 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779159 2255814 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.779167 2255814 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:48.779173 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779207 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.779289 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779343 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779509 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779556 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.786929 2255814 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-484027" context rescaled to 1 replicas
	I0911 12:08:48.786996 2255814 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:48.789204 2255814 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:48.790973 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:48.799774 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0911 12:08:48.800366 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.800462 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0911 12:08:48.801065 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.801286 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.801312 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802064 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.802091 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802105 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0911 12:08:48.802166 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802495 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.802842 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.803804 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.803827 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.804437 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.805108 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.805156 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.823113 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0911 12:08:48.823705 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.824347 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.824378 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.824848 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.825073 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.827337 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.827355 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0911 12:08:48.830403 2255814 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:48.827726 2255814 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-484027"
	I0911 12:08:48.828116 2255814 main.go:141] libmachine: () Calling .GetVersion
	W0911 12:08:48.832240 2255814 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:48.832297 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.832351 2255814 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:48.832372 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:48.832404 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.832767 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.832846 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.833819 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.833843 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.834348 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.834583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.836499 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.837953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838586 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.838616 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838662 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.838863 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.839009 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.839383 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.848085 2255814 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:48.850041 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:48.850077 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:48.850117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.853766 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.854324 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.855024 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.855222 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.855427 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.857253 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0911 12:08:48.858013 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.858572 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.858593 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.858922 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.859424 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.859461 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.877066 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0911 12:08:48.877762 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.878430 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.878451 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.878986 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.879214 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.881452 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.881771 2255814 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:48.881790 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:48.881810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.885826 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.886380 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.886406 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.887000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.887261 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.887456 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.887604 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.990643 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:49.087344 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:49.087379 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:49.088448 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:49.172284 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:49.172325 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:49.284010 2255814 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:49.284396 2255814 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:49.296054 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:49.296086 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:49.379706 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:51.018731 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.028036666s)
	I0911 12:08:51.018796 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.018733 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.930229373s)
	I0911 12:08:51.018900 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018920 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019201 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019252 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019291 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019304 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019315 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019325 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019420 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019433 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019445 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019457 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021142 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021184 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021199 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021204 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021238 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.021259 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021542 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021615 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021683 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.122492 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742646501s)
	I0911 12:08:51.122563 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.122582 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123214 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123224 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.123232 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123668 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123713 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123729 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123743 2255814 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-484027"
	I0911 12:08:51.126333 2255814 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:08:48.273682 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:50.640588 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:51.128042 2255814 addons.go:502] enable addons completed in 2.34962006s: enabled=[storage-provisioner default-storageclass metrics-server]
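With "enable addons completed", a quick manual check of whether metrics-server actually came up is to look at its APIService registration and Deployment; this is a sketch only (the context name comes from the profile in this log, and these commands are not part of the test):

    kubectl --context default-k8s-diff-port-484027 get apiservice v1beta1.metrics.k8s.io
    kubectl --context default-k8s-diff-port-484027 -n kube-system get deploy metrics-server
    kubectl --context default-k8s-diff-port-484027 top nodes   # only works once metrics are being served

Note that this run points the metrics-server addon at fake.domain/registry.k8s.io/echoserver:1.4 (see the "Using image" line above), so the pod is not expected to become Ready, which is what the repeated pod_ready checks below show.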
	I0911 12:08:51.299348 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:49.857883 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.62602487s)
	I0911 12:08:49.857920 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0911 12:08:49.857935 2255048 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858008 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858007 2255048 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.626200516s)
	I0911 12:08:49.858128 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0911 12:08:49.858215 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:08:53.140732 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:55.639106 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:53.799851 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:56.661585 2255814 node_ready.go:49] node "default-k8s-diff-port-484027" has status "Ready":"True"
	I0911 12:08:56.661621 2255814 node_ready.go:38] duration metric: took 7.377564832s waiting for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:56.661651 2255814 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:56.675600 2255814 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.686880 2255814 pod_ready.go:92] pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.686977 2255814 pod_ready.go:81] duration metric: took 11.34453ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.687027 2255814 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.695897 2255814 pod_ready.go:92] pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.695991 2255814 pod_ready.go:81] duration metric: took 8.931143ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.696011 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:57.305638 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (7.447392742s)
	I0911 12:08:57.305689 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0911 12:08:57.305809 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.447768556s)
	I0911 12:08:57.305836 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0911 12:08:57.305855 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:57.305932 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:58.142333 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.644281 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:58.721936 2255814 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.721964 2255814 pod_ready.go:81] duration metric: took 2.025944093s waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.721978 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728483 2255814 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.728509 2255814 pod_ready.go:81] duration metric: took 6.525188ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728522 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868777 2255814 pod_ready.go:92] pod "kube-proxy-ldgjr" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.868821 2255814 pod_ready.go:81] duration metric: took 140.280926ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868839 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266668 2255814 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:59.266699 2255814 pod_ready.go:81] duration metric: took 397.852252ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266710 2255814 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:01.578711 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.172738 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.866760661s)
	I0911 12:09:00.172779 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0911 12:09:00.172904 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:00.172989 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:01.745988 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.572965994s)
	I0911 12:09:01.746029 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0911 12:09:01.746047 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:01.746105 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:03.140327 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:05.141268 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:04.080056 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:06.578690 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:03.814358 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.068208039s)
	I0911 12:09:03.814432 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0911 12:09:03.814452 2255048 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:03.814516 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:04.982461 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.167909383s)
	I0911 12:09:04.982505 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0911 12:09:04.982542 2255048 cache_images.go:123] Successfully loaded all cached images
	I0911 12:09:04.982549 2255048 cache_images.go:92] LoadImages completed in 20.798002598s
	I0911 12:09:04.982647 2255048 ssh_runner.go:195] Run: crio config
	I0911 12:09:05.047992 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:05.048024 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:05.048049 2255048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:09:05.048070 2255048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-352076 NodeName:no-preload-352076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPat
h:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:09:05.048268 2255048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-352076"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
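minikube writes the generated config above to /var/tmp/minikube/kubeadm.yaml.new (the scp a few lines below) and later diffs it against the existing /var/tmp/minikube/kubeadm.yaml to decide whether a cluster restart is sufficient. As a sketch only, the same file could be checked against the bundled kubeadm before use; this assumes kubeadm sits next to the kubelet/kubectl binaries found under /var/lib/minikube/binaries/v1.28.1 and that its `config validate` subcommand is available, neither of which the test itself exercises:

    sudo /var/lib/minikube/binaries/v1.28.1/kubeadm config validate \
      --config /var/tmp/minikube/kubeadm.yaml.new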
	
	I0911 12:09:05.048352 2255048 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-352076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
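The unit override above is installed as a systemd drop-in (the scp of 10-kubeadm.conf and kubelet.service follows below). A small sketch of how to confirm which ExecStart systemd ends up using, assuming a shell inside the VM:

    sudo systemctl daemon-reload
    systemctl cat kubelet                          # kubelet.service plus the 10-kubeadm.conf drop-in
    systemctl show kubelet -p ExecStart --no-pager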
	I0911 12:09:05.048427 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:09:05.060720 2255048 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:09:05.060881 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:09:05.072228 2255048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:09:05.093943 2255048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:09:05.113383 2255048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0911 12:09:05.136859 2255048 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0911 12:09:05.143807 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:09:05.160629 2255048 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076 for IP: 192.168.72.157
	I0911 12:09:05.160686 2255048 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:09:05.161057 2255048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:09:05.161131 2255048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:09:05.161253 2255048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.key
	I0911 12:09:05.161367 2255048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key.66fc92c5
	I0911 12:09:05.161447 2255048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key
	I0911 12:09:05.161605 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:09:05.161646 2255048 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:09:05.161655 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:09:05.161696 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:09:05.161745 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:09:05.161773 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:09:05.161838 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:09:05.162864 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:09:05.196273 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 12:09:05.226310 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:09:05.259094 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:09:05.296329 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:09:05.329537 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:09:05.363893 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:09:05.398183 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:09:05.431986 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:09:05.462584 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:09:05.494047 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:09:05.531243 2255048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:09:05.554858 2255048 ssh_runner.go:195] Run: openssl version
	I0911 12:09:05.564158 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:09:05.578611 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585480 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585563 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.592835 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:09:05.606413 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:09:05.618978 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626101 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626179 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.634526 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:09:05.648394 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:09:05.664598 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671632 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671734 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.679143 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
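The openssl/ln pairs above implement the standard OpenSSL CA-directory layout: `openssl x509 -hash -noout` prints the certificate's subject-name hash, and a symlink named <hash>.0 in /etc/ssl/certs lets anything using that CApath find the certificate. A hedged manual sketch with one of the certificates from this log (b5213941 matches the symlink created above):

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"
    openssl verify -CApath /etc/ssl/certs /usr/share/ca-certificates/minikubeCA.pem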
	I0911 12:09:05.691797 2255048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:09:05.698734 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:09:05.706797 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:09:05.713927 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:09:05.721394 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:09:05.728652 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:09:05.736364 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
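Each `-checkend 86400` call above asks openssl whether the certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 means it will, 1 means it would have expired, which is how the restart path decides whether certificates need regenerating. A minimal sketch of the same check:

    if sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-etcd-client.crt; then
      echo "still valid for at least 24h"
    else
      echo "expires within 24h - would need regeneration"
    fi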
	I0911 12:09:05.744505 2255048 kubeadm.go:404] StartCluster: {Name:no-preload-352076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:09:05.744673 2255048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:09:05.744751 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:05.783568 2255048 cri.go:89] found id: ""
	I0911 12:09:05.783665 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:09:05.794403 2255048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:09:05.794443 2255048 kubeadm.go:636] restartCluster start
	I0911 12:09:05.794532 2255048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:09:05.808458 2255048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.809808 2255048 kubeconfig.go:92] found "no-preload-352076" server: "https://192.168.72.157:8443"
	I0911 12:09:05.812541 2255048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:09:05.824406 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.824488 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.838004 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.838029 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.838081 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.850725 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.351553 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.351683 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.365583 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.851068 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.851203 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.865829 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.351654 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.351840 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.365462 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.851109 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.851227 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.865132 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:08.351854 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.351980 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.364980 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.637342 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.637899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.638591 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.078188 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.575790 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:08.850933 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.851079 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.865313 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.350825 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.350918 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.363633 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.850908 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.851009 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.864051 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.351371 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.351459 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.364187 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.851868 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.851993 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.865706 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.351327 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.351445 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.364860 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.851490 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.851579 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.865090 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.351698 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.351841 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.365554 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.851082 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.851189 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.863359 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.351652 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.351762 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.364220 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.638913 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.138385 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:14.075701 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.083424 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:13.851558 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.851650 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.864548 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.351104 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.351196 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.363567 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.851181 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.851287 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.865371 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.351813 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:15.351921 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:15.364728 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.825491 2255048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:09:15.825532 2255048 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:09:15.825549 2255048 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:09:15.825628 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:15.863098 2255048 cri.go:89] found id: ""
	I0911 12:09:15.863207 2255048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:09:15.881673 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:09:15.892264 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:09:15.892363 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903142 2255048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903168 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:16.075542 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.073042 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.305269 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.399770 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.484630 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:09:17.484713 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:17.502746 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.017919 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.139562 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:20.643130 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.578074 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:21.077490 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.517850 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.018007 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.518125 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.018229 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.062967 2255048 api_server.go:72] duration metric: took 2.578334133s to wait for apiserver process to appear ...
	I0911 12:09:20.062999 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:09:20.063024 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.063765 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.063812 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.064348 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.564847 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.276251 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.276297 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.276314 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.320049 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.320081 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.564444 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.570484 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:24.570524 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.064830 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.071229 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:25.071269 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.564901 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.570887 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:09:25.580713 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:09:25.580746 2255048 api_server.go:131] duration metric: took 5.517738896s to wait for apiserver health ...
	I0911 12:09:25.580759 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:25.580768 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:25.583425 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:09:23.139085 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.140565 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:23.077522 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.576471 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.585300 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:09:25.610742 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:09:25.660757 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:09:25.680043 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:09:25.680087 2255048 system_pods.go:61] "coredns-5dd5756b68-mghg7" [380c0d4e-d7e3-4434-9757-f4debc5206d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:09:25.680104 2255048 system_pods.go:61] "etcd-no-preload-352076" [4f74cb61-25fb-4478-afd4-3b0f0ef1bdae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:09:25.680115 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [09ed0349-f0dc-4ab0-b057-230daeb8e7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:09:25.680127 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [c93ec6ac-408b-4859-b45b-79bb3e3b53d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:09:25.680142 2255048 system_pods.go:61] "kube-proxy-f748l" [8379d15e-e886-48cb-8a53-3a48aef7c9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:09:25.680157 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [7e7068d1-7f6b-4fe7-b1f4-73ddab4c7db4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:09:25.680174 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-tvrkk" [7b463025-d2f8-4f1d-aa69-740cd828c1d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:09:25.680188 2255048 system_pods.go:61] "storage-provisioner" [52928c2e-1383-41b0-817c-203d016da7df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:09:25.680201 2255048 system_pods.go:74] duration metric: took 19.417405ms to wait for pod list to return data ...
	I0911 12:09:25.680220 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:09:25.685088 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:09:25.685127 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:09:25.685144 2255048 node_conditions.go:105] duration metric: took 4.914847ms to run NodePressure ...
	I0911 12:09:25.685170 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:26.127026 2255048 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137211 2255048 kubeadm.go:787] kubelet initialised
	I0911 12:09:26.137247 2255048 kubeadm.go:788] duration metric: took 10.126758ms waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137258 2255048 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:09:26.144732 2255048 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:28.168555 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:27.637951 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.142107 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.144784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:28.078707 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.575535 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.575917 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.169198 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:31.168599 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:31.168623 2255048 pod_ready.go:81] duration metric: took 5.02386193s waiting for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:31.168633 2255048 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194954 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:32.194986 2255048 pod_ready.go:81] duration metric: took 1.026346965s waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194997 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218527 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:33.218555 2255048 pod_ready.go:81] duration metric: took 1.02355184s waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218568 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:34.637330 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:36.638472 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:34.577030 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:37.076594 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:35.576857 2255048 pod_ready.go:102] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:38.072765 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.072791 2255048 pod_ready.go:81] duration metric: took 4.854217828s waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.072807 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080177 2255048 pod_ready.go:92] pod "kube-proxy-f748l" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.080219 2255048 pod_ready.go:81] duration metric: took 7.386736ms waiting for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080234 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086910 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.086935 2255048 pod_ready.go:81] duration metric: took 6.692353ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086947 2255048 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:39.139899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.638556 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:39.076977 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.077356 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:40.275588 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:42.279343 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.140467 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.638950 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:43.575930 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.075946 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.773655 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.773783 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.639947 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.136953 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.076228 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:50.076280 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:52.575191 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.781871 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.276719 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.137841 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.639201 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:54.575724 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:56.577539 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.774303 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.775398 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:57.776172 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:58.137820 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.140032 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:59.075343 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:01.077352 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.274288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.281024 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.637659 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.638359 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:07.138194 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:03.576039 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:05.581746 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.774609 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:06.777649 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.638158 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:12.138452 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:08.086089 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:10.577034 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.274229 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:11.773772 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:14.637905 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.137141 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.075497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:15.075928 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.077025 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.777087 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:16.273244 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:18.274393 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.138225 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.638206 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.574944 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.577126 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:20.274987 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:22.774026 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:23.638427 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:24.077660 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:26.576065 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.274996 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:27.773877 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.143807 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:30.639138 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.576550 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:31.076503 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:29.775191 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:32.275040 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.137429 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.137961 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:37.141067 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.575704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.576704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:34.773882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:36.774534 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:39.637647 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.639902 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.076297 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:40.577008 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.774671 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.274312 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.274935 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:44.137187 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:46.141314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.079758 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.589530 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.774930 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.273321 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.638868 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:51.139417 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.076212 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.078989 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.575259 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.274454 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.275086 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:53.637980 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:55.638403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.575452 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:56.575714 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.777442 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:57.273658 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:58.136668 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:00.137799 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.077541 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.576462 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.275476 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.773680 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:02.636537 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.637865 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:07.136712 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.078863 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.577886 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:03.776995 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.274574 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:08.275266 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.137886 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.147508 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.075793 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.575828 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:10.275357 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:12.775241 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:13.638603 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.137986 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:14.076435 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.078427 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:15.275325 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:17.275446 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.138511 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.638477 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.575789 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.575987 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.576545 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:19.774865 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.280364 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:23.138801 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:25.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.577693 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:26.581497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.774606 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.274878 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.639126 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.640834 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:32.138497 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.079788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.575364 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.774769 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.777925 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.636906 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.640855 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:33.576041 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:35.577513 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.275601 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.282120 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:39.138445 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:41.638724 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.074500 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.077237 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:42.078135 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.774882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.776485 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.277653 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.639224 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.137265 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:44.574433 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.576378 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:45.776572 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.275210 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.137470 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.580531 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:51.076018 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.775117 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.775535 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.641468 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.138561 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.138875 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:53.078788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.079529 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.577003 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.274582 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.774611 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:59.637786 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:01.644407 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.075246 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.078022 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.274022 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.275711 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.137692 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.614957 2255187 pod_ready.go:81] duration metric: took 4m0.000726123s waiting for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:04.614999 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:04.615020 2255187 pod_ready.go:38] duration metric: took 4m6.604014313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:04.615056 2255187 kubeadm.go:640] restartCluster took 4m25.597873734s
	W0911 12:12:04.615156 2255187 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:12:04.615268 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:12:04.576764 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:06.579533 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.779450 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:07.276202 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:08.580439 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.075465 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:09.277634 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.776920 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:13.076473 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:15.077335 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:17.574470 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:14.276806 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:16.774759 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:19.576080 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:22.078686 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:18.775173 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:21.274723 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:23.276576 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:24.082590 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:26.584485 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:25.277284 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:27.774953 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:29.079400 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:31.575879 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:30.278194 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:32.773872 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.434471 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.819147659s)
	I0911 12:12:37.434634 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:12:37.450370 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:12:37.463019 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:12:37.473313 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:12:37.473375 2255187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:12:33.578208 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.076227 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:34.775135 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.775239 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.703004 2255187 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:12:38.574884 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:40.577027 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:38.779298 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:41.274039 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.076990 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.077566 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:47.576057 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.775208 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.775382 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:48.274401 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:49.022486 2255187 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:12:49.022566 2255187 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:12:49.022667 2255187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:12:49.022825 2255187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:12:49.022994 2255187 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:12:49.023081 2255187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:12:49.025047 2255187 out.go:204]   - Generating certificates and keys ...
	I0911 12:12:49.025151 2255187 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:12:49.025249 2255187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:12:49.025340 2255187 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:12:49.025428 2255187 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:12:49.025521 2255187 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:12:49.025599 2255187 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:12:49.025703 2255187 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:12:49.025801 2255187 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:12:49.025898 2255187 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:12:49.026021 2255187 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:12:49.026083 2255187 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:12:49.026163 2255187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:12:49.026252 2255187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:12:49.026338 2255187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:12:49.026436 2255187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:12:49.026518 2255187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:12:49.026609 2255187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:12:49.026694 2255187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:12:49.028378 2255187 out.go:204]   - Booting up control plane ...
	I0911 12:12:49.028469 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:12:49.028538 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:12:49.028632 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:12:49.028759 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:12:49.028894 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:12:49.028960 2255187 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:12:49.029126 2255187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:12:49.029225 2255187 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504895 seconds
	I0911 12:12:49.029346 2255187 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:12:49.029485 2255187 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:12:49.029568 2255187 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:12:49.029801 2255187 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-235462 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:12:49.029864 2255187 kubeadm.go:322] [bootstrap-token] Using token: u1pjdn.ynd5x30gs2d5ngse
	I0911 12:12:49.031514 2255187 out.go:204]   - Configuring RBAC rules ...
	I0911 12:12:49.031635 2255187 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:12:49.031766 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:12:49.031961 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:12:49.032100 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:12:49.032234 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:12:49.032370 2255187 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:12:49.032513 2255187 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:12:49.032569 2255187 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:12:49.032641 2255187 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:12:49.032653 2255187 kubeadm.go:322] 
	I0911 12:12:49.032721 2255187 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:12:49.032733 2255187 kubeadm.go:322] 
	I0911 12:12:49.032850 2255187 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:12:49.032862 2255187 kubeadm.go:322] 
	I0911 12:12:49.032897 2255187 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:12:49.032954 2255187 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:12:49.033027 2255187 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:12:49.033034 2255187 kubeadm.go:322] 
	I0911 12:12:49.033113 2255187 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:12:49.033125 2255187 kubeadm.go:322] 
	I0911 12:12:49.033185 2255187 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:12:49.033194 2255187 kubeadm.go:322] 
	I0911 12:12:49.033272 2255187 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:12:49.033364 2255187 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:12:49.033478 2255187 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:12:49.033488 2255187 kubeadm.go:322] 
	I0911 12:12:49.033592 2255187 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:12:49.033674 2255187 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:12:49.033681 2255187 kubeadm.go:322] 
	I0911 12:12:49.033793 2255187 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.033940 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:12:49.033981 2255187 kubeadm.go:322] 	--control-plane 
	I0911 12:12:49.033994 2255187 kubeadm.go:322] 
	I0911 12:12:49.034117 2255187 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:12:49.034140 2255187 kubeadm.go:322] 
	I0911 12:12:49.034253 2255187 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.034398 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
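The kubeadm init output above ends with the standard join instructions, which are not exercised in this single-node run. For orientation only, a quick manual check of a freshly initialized control plane could look like the sketch below, reusing the same binary and kubeconfig paths that appear elsewhere in this log (this is not a step the test performs):

    sudo /var/lib/minikube/binaries/v1.28.1/kubectl get nodes \
        --kubeconfig=/var/lib/minikube/kubeconfig
    sudo /var/lib/minikube/binaries/v1.28.1/kubectl get pods -n kube-system \
        --kubeconfig=/var/lib/minikube/kubeconfig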
	I0911 12:12:49.034424 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:12:49.034438 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:12:49.036358 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:12:49.037952 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:12:49.078613 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
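The 457-byte conflist copied above is not reproduced in the log. For orientation, a minimal bridge CNI configuration of the kind minikube drops into /etc/cni/net.d looks roughly like the sketch below; the field values here are assumptions for illustration, not the file from this run:

    # Illustrative only: write a minimal bridge + portmap CNI config.
    sudo mkdir -p /etc/cni/net.d
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF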
	I0911 12:12:49.171320 2255187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:12:49.171458 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.171492 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=embed-certs-235462 minikube.k8s.io/updated_at=2023_09_11T12_12_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.227806 2255187 ops.go:34] apiserver oom_adj: -16
	I0911 12:12:49.533909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.637357 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.234909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.734249 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.234928 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.734543 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:52.235022 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.576947 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.075970 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:50.275288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.775973 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.734323 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.234558 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.734598 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.235197 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.734524 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.234539 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.734806 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.234833 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.734868 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:57.235336 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.574674 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:56.577723 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:54.777705 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.274282 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.735164 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.234340 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.734332 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.234884 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.734265 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.234310 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.376532 2255187 kubeadm.go:1081] duration metric: took 11.205145428s to wait for elevateKubeSystemPrivileges.
	I0911 12:13:00.376577 2255187 kubeadm.go:406] StartCluster complete in 5m21.403889838s
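The burst of "kubectl get sa default" calls above is a readiness gate: the default ServiceAccount only appears once the controller-manager's service-account controller is running, so the runner retries until it exists before cluster setup proceeds. A hand-rolled equivalent of that wait, using the same binary and kubeconfig paths as in the log, would be:

    # Poll until the default ServiceAccount has been created.
    until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5   # retry until the service-account controller has created it
    done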
	I0911 12:13:00.376632 2255187 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.376754 2255187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:13:00.379195 2255187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.379496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:13:00.379604 2255187 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:13:00.379714 2255187 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-235462"
	I0911 12:13:00.379735 2255187 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-235462"
	W0911 12:13:00.379744 2255187 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:13:00.379770 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:13:00.379813 2255187 addons.go:69] Setting default-storageclass=true in profile "embed-certs-235462"
	I0911 12:13:00.379829 2255187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235462"
	I0911 12:13:00.379872 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380021 2255187 addons.go:69] Setting metrics-server=true in profile "embed-certs-235462"
	I0911 12:13:00.380038 2255187 addons.go:231] Setting addon metrics-server=true in "embed-certs-235462"
	W0911 12:13:00.380053 2255187 addons.go:240] addon metrics-server should already be in state true
	I0911 12:13:00.380092 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380276 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380299 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380314 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380338 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380443 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380464 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.400206 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0911 12:13:00.400222 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0911 12:13:00.400384 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0911 12:13:00.400955 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400990 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400957 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.401597 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401619 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.401749 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401769 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402081 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402237 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.402249 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402314 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402602 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402785 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.402950 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402972 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402986 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.403016 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.424319 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0911 12:13:00.424352 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0911 12:13:00.424991 2255187 addons.go:231] Setting addon default-storageclass=true in "embed-certs-235462"
	W0911 12:13:00.425015 2255187 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:13:00.425039 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425053 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.425387 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425471 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.425496 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.425891 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.425904 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426206 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.426222 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426644 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.426842 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.428151 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.429014 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.431494 2255187 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:13:00.429852 2255187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-235462" context rescaled to 1 replicas
	I0911 12:13:00.430039 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.433081 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:13:00.433096 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:13:00.433121 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.433184 2255187 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:13:00.438048 2255187 out.go:177] * Verifying Kubernetes components...
	I0911 12:13:00.436324 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.437532 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.438207 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.442076 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:00.442211 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.442240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.443931 2255187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:13:00.442451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.445563 2255187 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.445579 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.445583 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:13:00.445606 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.445674 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.449267 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0911 12:13:00.449534 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.449823 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.450240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.450270 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.450451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.450818 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.450838 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.450906 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.451120 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.451298 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.452043 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.452652 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.452686 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.470652 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0911 12:13:00.471240 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.471865 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.471888 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.472326 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.472745 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.474485 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.475072 2255187 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.475093 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:13:00.475123 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.478333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478757 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.478788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478949 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.479157 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.479301 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.479434 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.601913 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:13:00.601946 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:13:00.629483 2255187 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.629938 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:13:00.651067 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.653504 2255187 node_ready.go:49] node "embed-certs-235462" has status "Ready":"True"
	I0911 12:13:00.653549 2255187 node_ready.go:38] duration metric: took 24.023395ms waiting for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.653564 2255187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:00.663033 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:13:00.663075 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:13:00.668515 2255187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.709787 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.751534 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.751565 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:13:00.782859 2255187 pod_ready.go:92] pod "etcd-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.782894 2255187 pod_ready.go:81] duration metric: took 114.332855ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.782910 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.823512 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.891619 2255187 pod_ready.go:92] pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.891678 2255187 pod_ready.go:81] duration metric: took 108.758908ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.891695 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001447 2255187 pod_ready.go:92] pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.001483 2255187 pod_ready.go:81] duration metric: took 109.778603ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001501 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164166 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.164205 2255187 pod_ready.go:81] duration metric: took 162.694687ms waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164216 2255187 pod_ready.go:38] duration metric: took 510.637428ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:01.164239 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:13:01.164300 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:12:59.081781 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:59.267524 2255814 pod_ready.go:81] duration metric: took 4m0.000791617s waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:59.267566 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:59.267580 2255814 pod_ready.go:38] duration metric: took 4m2.605912471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:59.267603 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:12:59.267645 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:12:59.267855 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:12:59.332014 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:12:59.332042 2255814 cri.go:89] found id: ""
	I0911 12:12:59.332053 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:12:59.332135 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.338400 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:12:59.338493 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:12:59.373232 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:12:59.373284 2255814 cri.go:89] found id: ""
	I0911 12:12:59.373296 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:12:59.373371 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.379199 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:12:59.379288 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:12:59.415804 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:12:59.415840 2255814 cri.go:89] found id: ""
	I0911 12:12:59.415852 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:12:59.415940 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.422256 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:12:59.422343 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:12:59.462300 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:12:59.462327 2255814 cri.go:89] found id: ""
	I0911 12:12:59.462336 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:12:59.462392 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.467244 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:12:59.467364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:12:59.499594 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.499619 2255814 cri.go:89] found id: ""
	I0911 12:12:59.499627 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:12:59.499697 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.504481 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:12:59.504570 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:12:59.536588 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.536620 2255814 cri.go:89] found id: ""
	I0911 12:12:59.536631 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:12:59.536701 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.541454 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:12:59.541529 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:12:59.577953 2255814 cri.go:89] found id: ""
	I0911 12:12:59.577990 2255814 logs.go:284] 0 containers: []
	W0911 12:12:59.578001 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:12:59.578010 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:12:59.578084 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:12:59.616256 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.616283 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.616288 2255814 cri.go:89] found id: ""
	I0911 12:12:59.616296 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:12:59.616350 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.621818 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.627431 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:12:59.627462 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:12:59.690633 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:12:59.690681 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.733084 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:12:59.733133 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.775174 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:12:59.775220 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:12:59.829438 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:12:59.829492 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.894842 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:12:59.894888 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.936662 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:12:59.936703 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:12:59.955507 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:12:59.955544 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:00.127082 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:00.127129 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:00.178458 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:00.178501 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:00.226759 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:00.226805 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:00.267586 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:00.267637 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:00.311431 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:00.311465 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
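The log-gathering pass above is driven over SSH, but the same evidence can be collected by hand on the node with the commands the runner invokes; container IDs are resolved with crictl ps (the ID placeholder below is not a value from this run):

    sudo crictl ps -a --quiet --name=kube-apiserver   # resolve the container ID
    sudo crictl logs --tail 400 <container-id>        # recent logs for that container
    sudo journalctl -u kubelet -n 400                 # kubelet unit logs
    sudo journalctl -u crio -n 400                    # CRI-O unit logs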
	I0911 12:12:59.276905 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:01.775061 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:02.733813 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.103819607s)
	I0911 12:13:02.733859 2255187 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
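The sed pipeline that just completed rewrites the coredns ConfigMap so that host.minikube.internal resolves to the host-only gateway; the inserted Corefile fragment is the hosts block quoted in the command itself. To inspect the result by hand (assuming kubectl access to this cluster):

    # Show the patched Corefile; the injected block sits ahead of the forward plugin:
    #        hosts {
    #           192.168.50.1 host.minikube.internal
    #           fallthrough
    #        }
    kubectl -n kube-system get configmap coredns -o yaml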
	I0911 12:13:03.298110 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.646997747s)
	I0911 12:13:03.298169 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298179 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298209 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.588380755s)
	I0911 12:13:03.298256 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298278 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298545 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298566 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298577 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298586 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298596 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298611 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298622 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298834 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.298891 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298904 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299077 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299104 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299117 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.299127 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.299083 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.299459 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299474 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.485702 2255187 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.321356388s)
	I0911 12:13:03.485741 2255187 api_server.go:72] duration metric: took 3.052522714s to wait for apiserver process to appear ...
	I0911 12:13:03.485748 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.485768 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:13:03.485987 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.66240811s)
	I0911 12:13:03.486070 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486090 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486553 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.486621 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486642 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486666 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486683 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486940 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486956 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486968 2255187 addons.go:467] Verifying addon metrics-server=true in "embed-certs-235462"
	I0911 12:13:03.489450 2255187 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:13:03.491514 2255187 addons.go:502] enable addons completed in 3.11190942s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:13:03.571696 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:13:03.576690 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:03.576730 2255187 api_server.go:131] duration metric: took 90.974437ms to wait for apiserver health ...
	I0911 12:13:03.576743 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:03.592687 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:03.592734 2255187 system_pods.go:61] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.592745 2255187 system_pods.go:61] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.592753 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.592761 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.592769 2255187 system_pods.go:61] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.592778 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.592787 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.592802 2255187 system_pods.go:61] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.592839 2255187 system_pods.go:74] duration metric: took 16.087864ms to wait for pod list to return data ...
	I0911 12:13:03.592855 2255187 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:03.606427 2255187 default_sa.go:45] found service account: "default"
	I0911 12:13:03.606517 2255187 default_sa.go:55] duration metric: took 13.6536ms for default service account to be created ...
	I0911 12:13:03.606542 2255187 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:03.622692 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.622752 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.622765 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.622777 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.622786 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.622801 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.622814 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.622980 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.623076 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.623157 2255187 retry.go:31] will retry after 240.25273ms: missing components: kube-dns, kube-proxy
	I0911 12:13:03.874980 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.875031 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.875041 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.875048 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.875081 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.875094 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.875104 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.875118 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.875130 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.875163 2255187 retry.go:31] will retry after 285.300702ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.171503 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.171548 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.171558 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.171566 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.171574 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.171580 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.171587 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.171598 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.171607 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.171632 2255187 retry.go:31] will retry after 386.395514ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.565931 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.565972 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.565982 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.565991 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.565998 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.566007 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.566015 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.566025 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.566039 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.566062 2255187 retry.go:31] will retry after 526.673ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.104101 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.104230 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:05.104257 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.104277 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.104294 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.104312 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:05.104336 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.104353 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.104363 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.104385 2255187 retry.go:31] will retry after 628.795734ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.745358 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.745392 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Running
	I0911 12:13:05.745400 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.745408 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.745416 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.745421 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Running
	I0911 12:13:05.745427 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.745440 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.745451 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.745463 2255187 system_pods.go:126] duration metric: took 2.138903103s to wait for k8s-apps to be running ...
	I0911 12:13:05.745480 2255187 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:05.745540 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:05.762725 2255187 system_svc.go:56] duration metric: took 17.229678ms WaitForService to wait for kubelet.
	I0911 12:13:05.762766 2255187 kubeadm.go:581] duration metric: took 5.329544538s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:05.762793 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:05.767056 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:05.767087 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:05.767112 2255187 node_conditions.go:105] duration metric: took 4.314286ms to run NodePressure ...
	I0911 12:13:05.767131 2255187 start.go:228] waiting for startup goroutines ...
	I0911 12:13:05.767138 2255187 start.go:233] waiting for cluster config update ...
	I0911 12:13:05.767147 2255187 start.go:242] writing updated cluster config ...
	I0911 12:13:05.767462 2255187 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:05.823796 2255187 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:05.826336 2255187 out.go:177] * Done! kubectl is now configured to use "embed-certs-235462" cluster and "default" namespace by default
	I0911 12:13:03.450576 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:13:03.472433 2255814 api_server.go:72] duration metric: took 4m14.685379298s to wait for apiserver process to appear ...
	I0911 12:13:03.472469 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.472520 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:03.472614 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:03.515433 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:03.515471 2255814 cri.go:89] found id: ""
	I0911 12:13:03.515483 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:03.515560 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.521654 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:03.521745 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:03.569379 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:03.569406 2255814 cri.go:89] found id: ""
	I0911 12:13:03.569416 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:03.569481 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.574638 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:03.574723 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:03.610693 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.610722 2255814 cri.go:89] found id: ""
	I0911 12:13:03.610733 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:03.610794 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.615774 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:03.615894 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:03.657087 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:03.657117 2255814 cri.go:89] found id: ""
	I0911 12:13:03.657129 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:03.657211 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.662224 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:03.662315 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:03.698282 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.698359 2255814 cri.go:89] found id: ""
	I0911 12:13:03.698381 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:03.698466 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.704160 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:03.704246 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:03.748122 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.748152 2255814 cri.go:89] found id: ""
	I0911 12:13:03.748162 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:03.748238 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.752657 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:03.752742 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:03.786815 2255814 cri.go:89] found id: ""
	I0911 12:13:03.786853 2255814 logs.go:284] 0 containers: []
	W0911 12:13:03.786863 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:03.786871 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:03.786942 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:03.824384 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.824409 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:03.824414 2255814 cri.go:89] found id: ""
	I0911 12:13:03.824421 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:03.824497 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.830317 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.836320 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:03.836355 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.887480 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:03.887524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.930466 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:03.930507 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.966522 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:03.966563 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:04.026111 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:04.026168 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:04.045422 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:04.045468 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:04.185127 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:04.185179 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:04.235047 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:04.235089 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:04.856084 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:04.856134 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:04.903388 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:04.903433 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:04.964861 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:04.964916 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:05.007565 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:05.007605 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:05.069630 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:05.069676 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.608676 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:13:07.615388 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:13:07.617076 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:07.617101 2255814 api_server.go:131] duration metric: took 4.14462443s to wait for apiserver health ...
	I0911 12:13:07.617110 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:07.617138 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:07.617196 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:07.656726 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:07.656750 2255814 cri.go:89] found id: ""
	I0911 12:13:07.656760 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:07.656850 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.661277 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:07.661364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:07.697717 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:07.697746 2255814 cri.go:89] found id: ""
	I0911 12:13:07.697754 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:07.697842 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.703800 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:07.703888 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:07.747003 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:07.747033 2255814 cri.go:89] found id: ""
	I0911 12:13:07.747043 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:07.747122 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.751932 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:07.752007 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:07.785348 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:07.785375 2255814 cri.go:89] found id: ""
	I0911 12:13:07.785386 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:07.785460 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.790170 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:07.790237 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:07.827467 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:07.827496 2255814 cri.go:89] found id: ""
	I0911 12:13:07.827510 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:07.827583 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.834478 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:07.834552 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:07.873739 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:07.873766 2255814 cri.go:89] found id: ""
	I0911 12:13:07.873774 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:07.873828 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.878424 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:07.878528 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:07.916665 2255814 cri.go:89] found id: ""
	I0911 12:13:07.916696 2255814 logs.go:284] 0 containers: []
	W0911 12:13:07.916708 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:07.916716 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:07.916780 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:07.950146 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:07.950172 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.950177 2255814 cri.go:89] found id: ""
	I0911 12:13:07.950185 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:07.950256 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.954996 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.959157 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:07.959189 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:08.027081 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:08.027112 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.775843 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:06.274500 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:08.079481 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:08.079522 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:08.118655 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:08.118696 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:08.177644 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:08.177690 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:08.192495 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:08.192524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:08.338344 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:08.338388 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:08.385409 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:08.385454 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:08.420999 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:08.421033 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:08.457183 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:08.457223 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:08.500499 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:08.500531 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:08.550546 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:08.550587 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:08.584802 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:08.584854 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:11.626627 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:11.626661 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.626666 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.626670 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.626675 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.626679 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.626683 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.626690 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.626696 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.626702 2255814 system_pods.go:74] duration metric: took 4.009586477s to wait for pod list to return data ...
	I0911 12:13:11.626710 2255814 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:11.630703 2255814 default_sa.go:45] found service account: "default"
	I0911 12:13:11.630735 2255814 default_sa.go:55] duration metric: took 4.019315ms for default service account to be created ...
	I0911 12:13:11.630747 2255814 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:11.637643 2255814 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:11.637681 2255814 system_pods.go:89] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.637687 2255814 system_pods.go:89] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.637693 2255814 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.637697 2255814 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.637701 2255814 system_pods.go:89] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.637706 2255814 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.637713 2255814 system_pods.go:89] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.637720 2255814 system_pods.go:89] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.637727 2255814 system_pods.go:126] duration metric: took 6.974046ms to wait for k8s-apps to be running ...
	I0911 12:13:11.637734 2255814 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:11.637781 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:11.656267 2255814 system_svc.go:56] duration metric: took 18.513073ms WaitForService to wait for kubelet.
	I0911 12:13:11.656313 2255814 kubeadm.go:581] duration metric: took 4m22.869270451s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:11.656342 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:11.660206 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:11.660242 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:11.660256 2255814 node_conditions.go:105] duration metric: took 3.907675ms to run NodePressure ...
	I0911 12:13:11.660271 2255814 start.go:228] waiting for startup goroutines ...
	I0911 12:13:11.660281 2255814 start.go:233] waiting for cluster config update ...
	I0911 12:13:11.660295 2255814 start.go:242] writing updated cluster config ...
	I0911 12:13:11.660673 2255814 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:11.716963 2255814 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:11.719502 2255814 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-484027" cluster and "default" namespace by default
	I0911 12:13:08.774412 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:10.776103 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:13.273773 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:15.274785 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:17.776143 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:20.274491 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:22.276115 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:24.776008 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:26.776415 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:29.274644 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:31.774477 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:33.774923 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:35.776441 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:37.777677 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:38.087732 2255048 pod_ready.go:81] duration metric: took 4m0.000743055s waiting for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	E0911 12:13:38.087774 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:13:38.087805 2255048 pod_ready.go:38] duration metric: took 4m11.950533095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:38.087877 2255048 kubeadm.go:640] restartCluster took 4m32.29342443s
	W0911 12:13:38.087958 2255048 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:13:38.088001 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:14:10.169576 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.081486969s)
	I0911 12:14:10.169706 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:10.189300 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:14:10.202385 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:14:10.213749 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:14:10.213816 2255048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:14:10.279484 2255048 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:14:10.279634 2255048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:14:10.462302 2255048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:14:10.462488 2255048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:14:10.462634 2255048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:14:10.659475 2255048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:14:10.661923 2255048 out.go:204]   - Generating certificates and keys ...
	I0911 12:14:10.662086 2255048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:14:10.662142 2255048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:14:10.662223 2255048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:14:10.662303 2255048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:14:10.663973 2255048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:14:10.665836 2255048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:14:10.667292 2255048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:14:10.668584 2255048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:14:10.669931 2255048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:14:10.670570 2255048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:14:10.671008 2255048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:14:10.671087 2255048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:14:10.865541 2255048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:14:11.063586 2255048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:14:11.341833 2255048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:14:11.573561 2255048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:14:11.574128 2255048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:14:11.577101 2255048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:14:11.579311 2255048 out.go:204]   - Booting up control plane ...
	I0911 12:14:11.579427 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:14:11.579550 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:14:11.579644 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:14:11.598440 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:14:11.599446 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:14:11.599531 2255048 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:14:11.738771 2255048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:14:21.243059 2255048 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503809 seconds
	I0911 12:14:21.243215 2255048 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:14:21.262148 2255048 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:14:21.802567 2255048 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:14:21.802822 2255048 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-352076 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:14:22.320035 2255048 kubeadm.go:322] [bootstrap-token] Using token: 3xtym4.6ytyj76o1n15fsq8
	I0911 12:14:22.321759 2255048 out.go:204]   - Configuring RBAC rules ...
	I0911 12:14:22.321922 2255048 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:14:22.329851 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:14:22.344882 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:14:22.349640 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:14:22.354357 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:14:22.359463 2255048 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:14:22.380068 2255048 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:14:22.713378 2255048 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:14:22.780207 2255048 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:14:22.780252 2255048 kubeadm.go:322] 
	I0911 12:14:22.780331 2255048 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:14:22.780344 2255048 kubeadm.go:322] 
	I0911 12:14:22.780441 2255048 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:14:22.780450 2255048 kubeadm.go:322] 
	I0911 12:14:22.780489 2255048 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:14:22.780568 2255048 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:14:22.780648 2255048 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:14:22.780657 2255048 kubeadm.go:322] 
	I0911 12:14:22.780757 2255048 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:14:22.780791 2255048 kubeadm.go:322] 
	I0911 12:14:22.780876 2255048 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:14:22.780895 2255048 kubeadm.go:322] 
	I0911 12:14:22.780958 2255048 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:14:22.781054 2255048 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:14:22.781157 2255048 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:14:22.781168 2255048 kubeadm.go:322] 
	I0911 12:14:22.781264 2255048 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:14:22.781363 2255048 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:14:22.781374 2255048 kubeadm.go:322] 
	I0911 12:14:22.781490 2255048 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.781618 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:14:22.781684 2255048 kubeadm.go:322] 	--control-plane 
	I0911 12:14:22.781695 2255048 kubeadm.go:322] 
	I0911 12:14:22.781813 2255048 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:14:22.781830 2255048 kubeadm.go:322] 
	I0911 12:14:22.781956 2255048 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.782107 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:14:22.783393 2255048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:14:22.783423 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:14:22.783434 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:14:22.785623 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:14:22.787278 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:14:22.817914 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:14:22.857165 2255048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:14:22.857266 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:22.857282 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=no-preload-352076 minikube.k8s.io/updated_at=2023_09_11T12_14_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.375677 2255048 ops.go:34] apiserver oom_adj: -16
	I0911 12:14:23.375731 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.497980 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.128149 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.627110 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.127658 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.627595 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.127143 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.627803 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.128061 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.627169 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.128081 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.628055 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.127187 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.627707 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.127233 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.627943 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.127222 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.627921 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.127760 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.628112 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.128107 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.627835 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.127171 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.627113 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.127499 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.627255 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.127199 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.314187 2255048 kubeadm.go:1081] duration metric: took 13.456994708s to wait for elevateKubeSystemPrivileges.
	I0911 12:14:36.314241 2255048 kubeadm.go:406] StartCluster complete in 5m30.569752421s
	I0911 12:14:36.314272 2255048 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.314446 2255048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:14:36.317402 2255048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.317739 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:14:36.318031 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:14:36.317936 2255048 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:14:36.318110 2255048 addons.go:69] Setting storage-provisioner=true in profile "no-preload-352076"
	I0911 12:14:36.318135 2255048 addons.go:231] Setting addon storage-provisioner=true in "no-preload-352076"
	I0911 12:14:36.318137 2255048 addons.go:69] Setting default-storageclass=true in profile "no-preload-352076"
	I0911 12:14:36.318148 2255048 addons.go:69] Setting metrics-server=true in profile "no-preload-352076"
	I0911 12:14:36.318163 2255048 addons.go:231] Setting addon metrics-server=true in "no-preload-352076"
	I0911 12:14:36.318164 2255048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-352076"
	W0911 12:14:36.318169 2255048 addons.go:240] addon metrics-server should already be in state true
	I0911 12:14:36.318218 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	W0911 12:14:36.318143 2255048 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:14:36.318318 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.318696 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318710 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318720 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318723 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318738 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318741 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.337905 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0911 12:14:36.338002 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0911 12:14:36.338589 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.338678 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.339313 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339317 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339340 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339363 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339435 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0911 12:14:36.339903 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339909 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339981 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.340160 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.340463 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.340496 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.340588 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.340617 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.341051 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.341512 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.341540 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.359712 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0911 12:14:36.360342 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.360914 2255048 addons.go:231] Setting addon default-storageclass=true in "no-preload-352076"
	W0911 12:14:36.360941 2255048 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:14:36.360969 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.360969 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.360984 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.361238 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.361271 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.361350 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.361540 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.362624 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:14:36.363381 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.363731 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.364093 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.364114 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.366385 2255048 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:14:36.364716 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.368526 2255048 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.368557 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:14:36.368640 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.368799 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.371211 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.374123 2255048 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:14:36.373727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.374507 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.376914 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.376951 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.376846 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:14:36.376970 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:14:36.376991 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.377194 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.377424 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.377656 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.380757 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381482 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.381508 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381537 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.381783 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.381953 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.382098 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.383003 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0911 12:14:36.383415 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.383860 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.383884 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.384174 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.384600 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.384650 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.401421 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0911 12:14:36.401987 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.402660 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.402684 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.403172 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.403456 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.406003 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.406531 2255048 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.406567 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:14:36.406593 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.410520 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411016 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.411072 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411331 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.411517 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.411723 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.411895 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.448234 2255048 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-352076" context rescaled to 1 replicas
	I0911 12:14:36.448281 2255048 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:14:36.450615 2255048 out.go:177] * Verifying Kubernetes components...
	I0911 12:14:36.452566 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:36.600188 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:14:36.600187 2255048 node_ready.go:35] waiting up to 6m0s for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611125 2255048 node_ready.go:49] node "no-preload-352076" has status "Ready":"True"
	I0911 12:14:36.611167 2255048 node_ready.go:38] duration metric: took 10.942009ms waiting for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611181 2255048 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:36.632729 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:14:36.632759 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:14:36.640639 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:36.656421 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.659146 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.711603 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:14:36.711644 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:14:36.780574 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:36.780614 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:14:36.874964 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.969647165s)
	I0911 12:14:38.569949 2255048 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.91343277s)
	I0911 12:14:38.570001 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570017 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570428 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570469 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570484 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570440 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570495 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570786 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570801 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570803 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570820 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570830 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.571133 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.571183 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.571196 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.756212 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:39.258501 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599303563s)
	I0911 12:14:39.258567 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258581 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.258631 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.383622497s)
	I0911 12:14:39.258693 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258713 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259000 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259069 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259129 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259139 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259040 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259150 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259154 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259165 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259178 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259468 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259514 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259605 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259620 2255048 addons.go:467] Verifying addon metrics-server=true in "no-preload-352076"
	I0911 12:14:39.261573 2255048 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:14:39.263513 2255048 addons.go:502] enable addons completed in 2.945573816s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:14:41.194698 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:41.682872 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.682904 2255048 pod_ready.go:81] duration metric: took 5.042231142s waiting for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.682919 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.685265 2255048 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685295 2255048 pod_ready.go:81] duration metric: took 2.370305ms waiting for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	E0911 12:14:41.685306 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685313 2255048 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694255 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.694295 2255048 pod_ready.go:81] duration metric: took 8.974837ms waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694309 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700807 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.700854 2255048 pod_ready.go:81] duration metric: took 6.536644ms waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700869 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707895 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.707918 2255048 pod_ready.go:81] duration metric: took 7.041207ms waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707930 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880293 2255048 pod_ready.go:92] pod "kube-proxy-f5w2x" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.880329 2255048 pod_ready.go:81] duration metric: took 172.39121ms waiting for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880345 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280038 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:42.280066 2255048 pod_ready.go:81] duration metric: took 399.713688ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280074 2255048 pod_ready.go:38] duration metric: took 5.668879257s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:42.280093 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:14:42.280143 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:14:42.303868 2255048 api_server.go:72] duration metric: took 5.855535753s to wait for apiserver process to appear ...
	I0911 12:14:42.303906 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:14:42.303927 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:14:42.310890 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:14:42.313428 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:14:42.313455 2255048 api_server.go:131] duration metric: took 9.541682ms to wait for apiserver health ...
	I0911 12:14:42.313464 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:14:42.483863 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:14:42.483895 2255048 system_pods.go:61] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.483900 2255048 system_pods.go:61] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.483905 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.483909 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.483912 2255048 system_pods.go:61] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.483916 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.483923 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.483930 2255048 system_pods.go:61] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.483936 2255048 system_pods.go:74] duration metric: took 170.467243ms to wait for pod list to return data ...
	I0911 12:14:42.483945 2255048 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:14:42.679235 2255048 default_sa.go:45] found service account: "default"
	I0911 12:14:42.679270 2255048 default_sa.go:55] duration metric: took 195.319105ms for default service account to be created ...
	I0911 12:14:42.679284 2255048 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:14:42.883048 2255048 system_pods.go:86] 8 kube-system pods found
	I0911 12:14:42.883078 2255048 system_pods.go:89] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.883084 2255048 system_pods.go:89] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.883089 2255048 system_pods.go:89] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.883093 2255048 system_pods.go:89] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.883097 2255048 system_pods.go:89] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.883103 2255048 system_pods.go:89] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.883110 2255048 system_pods.go:89] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.883118 2255048 system_pods.go:89] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.883126 2255048 system_pods.go:126] duration metric: took 203.835523ms to wait for k8s-apps to be running ...
	I0911 12:14:42.883133 2255048 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:14:42.883181 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:42.897962 2255048 system_svc.go:56] duration metric: took 14.812893ms WaitForService to wait for kubelet.
	I0911 12:14:42.898000 2255048 kubeadm.go:581] duration metric: took 6.449678905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:14:42.898022 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:14:43.080859 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:14:43.080890 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:14:43.080901 2255048 node_conditions.go:105] duration metric: took 182.874167ms to run NodePressure ...
	I0911 12:14:43.080913 2255048 start.go:228] waiting for startup goroutines ...
	I0911 12:14:43.080919 2255048 start.go:233] waiting for cluster config update ...
	I0911 12:14:43.080930 2255048 start.go:242] writing updated cluster config ...
	I0911 12:14:43.081223 2255048 ssh_runner.go:195] Run: rm -f paused
	I0911 12:14:43.135636 2255048 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:14:43.137835 2255048 out.go:177] * Done! kubectl is now configured to use "no-preload-352076" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:07:23 UTC, ends at Mon 2023-09-11 12:22:07 UTC. --
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.193412019Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b1bb7f97-9ff0-4aa4-823e-f1907868b73f name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.434198718Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6dc5638e-28d5-437d-99cf-0af1c3faf2f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.434296346Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6dc5638e-28d5-437d-99cf-0af1c3faf2f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.434603770Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6dc5638e-28d5-437d-99cf-0af1c3faf2f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.473108851Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=10ff8a8f-7730-4050-9157-c74d1e489eb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.473199549Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=10ff8a8f-7730-4050-9157-c74d1e489eb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.473383599Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=10ff8a8f-7730-4050-9157-c74d1e489eb5 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.512578407Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=919b4153-8025-4a15-a9e6-dcf85402dbb8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.512644511Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=919b4153-8025-4a15-a9e6-dcf85402dbb8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.512813975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=919b4153-8025-4a15-a9e6-dcf85402dbb8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.558479704Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=ae56cc03-1487-44d5-aefa-3faf04e37332 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.558578119Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=ae56cc03-1487-44d5-aefa-3faf04e37332 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.558801639Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=ae56cc03-1487-44d5-aefa-3faf04e37332 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.612113512Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=19271237-e443-45e1-bb14-b65fe4d569a9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.612211916Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=19271237-e443-45e1-bb14-b65fe4d569a9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.612530420Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=19271237-e443-45e1-bb14-b65fe4d569a9 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.651379235Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=e0c99286-c15d-4d71-acea-ffad35f851b3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.651602074Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=e0c99286-c15d-4d71-acea-ffad35f851b3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.651842066Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=e0c99286-c15d-4d71-acea-ffad35f851b3 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.692587922Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=aa215f23-6251-4de5-8ec1-4217f088d95f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.692701836Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=aa215f23-6251-4de5-8ec1-4217f088d95f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.692888184Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=aa215f23-6251-4de5-8ec1-4217f088d95f name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.727571093Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=eff705a4-9919-45ce-9822-e46b9c82fccc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.727662147Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=eff705a4-9919-45ce-9822-e46b9c82fccc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:07 embed-certs-235462 crio[717]: time="2023-09-11 12:22:07.727830431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=eff705a4-9919-45ce-9822-e46b9c82fccc name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	e81fbe6b94d58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   2e6607472212b
	b795df7f42a7c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   36da79fad0977
	2bfb96d3e2a49       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   9 minutes ago       Running             kube-proxy                0                   1999a19e8956a
	0ac50f64245d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   6a200efd589fd
	3fde1e3e93d68       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   9 minutes ago       Running             kube-controller-manager   2                   17d58a640b326
	738708a4c7cb1       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   9 minutes ago       Running             kube-scheduler            2                   afb23c8282c99
	2ba4ad4b835e5       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   9 minutes ago       Running             kube-apiserver            2                   dbca37a0722d3
	
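The container status table above is in the same format that crictl prints on the node. A minimal sketch for pulling the same view by hand, assuming crictl is available inside the guest (as it normally is with minikube's CRI-O runtime) and using the profile name embed-certs-235462 taken from the log:

	minikube ssh -p embed-certs-235462 "sudo crictl ps -a"

The columns correspond one-to-one: CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT and POD ID.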
	* 
	* ==> coredns [b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59226 - 44552 "HINFO IN 2336702580251102645.7041884445010550068. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009525966s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-235462
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-235462
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=embed-certs-235462
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T12_12_49_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 12:12:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-235462
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 12:21:59 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:18:16 +0000   Mon, 11 Sep 2023 12:12:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:18:16 +0000   Mon, 11 Sep 2023 12:12:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:18:16 +0000   Mon, 11 Sep 2023 12:12:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:18:16 +0000   Mon, 11 Sep 2023 12:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.96
	  Hostname:    embed-certs-235462
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d37f3e78025c49b7a144561b8b7550e8
	  System UUID:                d37f3e78-025c-49b7-a144-561b8b7550e8
	  Boot ID:                    1932a667-69e9-491f-b94b-5fa920cc9eb9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-hzq9f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 etcd-embed-certs-235462                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m19s
	  kube-system                 kube-apiserver-embed-certs-235462             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-controller-manager-embed-certs-235462    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m19s
	  kube-system                 kube-proxy-zlcth                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-235462             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-qbrf2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m5s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m5s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m3s                   kube-proxy       
	  Normal  NodeHasSufficientMemory  9m29s (x8 over 9m29s)  kubelet          Node embed-certs-235462 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m29s (x8 over 9m29s)  kubelet          Node embed-certs-235462 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m29s (x7 over 9m29s)  kubelet          Node embed-certs-235462 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m29s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node embed-certs-235462 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node embed-certs-235462 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node embed-certs-235462 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m19s                  kubelet          Node embed-certs-235462 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m9s                   node-controller  Node embed-certs-235462 event: Registered Node embed-certs-235462 in Controller
	  Normal  NodeReady                9m9s                   kubelet          Node embed-certs-235462 status is now: NodeReady
	
	* 
	* ==> dmesg <==
	* [Sep11 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.094011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.722474] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.741675] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155274] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.448189] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.199419] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.113017] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.162353] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.117645] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.237402] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.304865] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[Sep11 12:08] kauditd_printk_skb: 29 callbacks suppressed
	[Sep11 12:12] systemd-fstab-generator[3576]: Ignoring "noauto" for root device
	[  +9.855657] systemd-fstab-generator[3899]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b] <==
	* {"level":"info","ts":"2023-09-11T12:12:42.28415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe switched to configuration voters=(319442344736368894)"}
	{"level":"info","ts":"2023-09-11T12:12:42.284392Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa78aab20fdf43c2","local-member-id":"46ee31ebc3aa8fe","added-peer-id":"46ee31ebc3aa8fe","added-peer-peer-urls":["https://192.168.50.96:2380"]}
	{"level":"info","ts":"2023-09-11T12:12:42.292299Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T12:12:42.292614Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.96:2380"}
	{"level":"info","ts":"2023-09-11T12:12:42.294772Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.96:2380"}
	{"level":"info","ts":"2023-09-11T12:12:42.296031Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T12:12:42.29596Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"46ee31ebc3aa8fe","initial-advertise-peer-urls":["https://192.168.50.96:2380"],"listen-peer-urls":["https://192.168.50.96:2380"],"advertise-client-urls":["https://192.168.50.96:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.96:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T12:12:42.632327Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-11T12:12:42.632606Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-11T12:12:42.632783Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe received MsgPreVoteResp from 46ee31ebc3aa8fe at term 1"}
	{"level":"info","ts":"2023-09-11T12:12:42.632891Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became candidate at term 2"}
	{"level":"info","ts":"2023-09-11T12:12:42.632951Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe received MsgVoteResp from 46ee31ebc3aa8fe at term 2"}
	{"level":"info","ts":"2023-09-11T12:12:42.633039Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"46ee31ebc3aa8fe became leader at term 2"}
	{"level":"info","ts":"2023-09-11T12:12:42.633099Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 46ee31ebc3aa8fe elected leader 46ee31ebc3aa8fe at term 2"}
	{"level":"info","ts":"2023-09-11T12:12:42.637415Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:12:42.638163Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"46ee31ebc3aa8fe","local-member-attributes":"{Name:embed-certs-235462 ClientURLs:[https://192.168.50.96:2379]}","request-path":"/0/members/46ee31ebc3aa8fe/attributes","cluster-id":"fa78aab20fdf43c2","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T12:12:42.638185Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:12:42.638407Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:12:42.657301Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T12:12:42.657554Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa78aab20fdf43c2","local-member-id":"46ee31ebc3aa8fe","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:12:42.657669Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:12:42.657703Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:12:42.657883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T12:12:42.658081Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T12:12:42.684196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.96:2379"}
	
	* 
	* ==> kernel <==
	*  12:22:08 up 14 min,  0 users,  load average: 0.10, 0.18, 0.16
	Linux embed-certs-235462 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7] <==
	* W0911 12:17:46.239872       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:17:46.239994       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:17:46.241342       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:18:45.120168       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.170.213:443: connect: connection refused
	I0911 12:18:45.120246       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:18:46.240571       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:18:46.240831       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:18:46.240869       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:18:46.241789       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:18:46.241878       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:18:46.241890       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:19:45.120087       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.170.213:443: connect: connection refused
	I0911 12:19:45.120199       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0911 12:20:45.120546       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.170.213:443: connect: connection refused
	I0911 12:20:45.120788       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:20:46.241581       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:20:46.241772       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:20:46.241828       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:20:46.241971       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:20:46.242084       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:20:46.243811       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:21:45.120649       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.170.213:443: connect: connection refused
	I0911 12:21:45.120892       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713] <==
	* I0911 12:16:36.091795       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="216.338µs"
	E0911 12:17:00.091941       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:17:00.513280       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:17:30.099340       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:17:30.526712       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:18:00.107369       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:18:00.537358       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:18:30.117496       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:18:30.548500       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:19:00.125586       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:19:00.560993       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:19:15.100076       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="382.22µs"
	I0911 12:19:29.092344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="190.726µs"
	E0911 12:19:30.133902       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:19:30.570987       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:20:00.141106       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:20:00.583537       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:20:30.147521       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:20:30.594683       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:21:00.154909       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:21:00.607286       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:21:30.162190       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:21:30.617116       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:22:00.170552       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:22:00.627376       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a] <==
	* I0911 12:13:04.886046       1 server_others.go:69] "Using iptables proxy"
	I0911 12:13:04.908867       1 node.go:141] Successfully retrieved node IP: 192.168.50.96
	I0911 12:13:04.979517       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 12:13:04.979594       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 12:13:04.982383       1 server_others.go:152] "Using iptables Proxier"
	I0911 12:13:04.982572       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 12:13:04.982874       1 server.go:846] "Version info" version="v1.28.1"
	I0911 12:13:04.983314       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:13:04.985550       1 config.go:188] "Starting service config controller"
	I0911 12:13:04.985663       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 12:13:04.985790       1 config.go:97] "Starting endpoint slice config controller"
	I0911 12:13:04.985925       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 12:13:04.989012       1 config.go:315] "Starting node config controller"
	I0911 12:13:04.989119       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 12:13:05.086951       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 12:13:05.087027       1 shared_informer.go:318] Caches are synced for service config
	I0911 12:13:05.091597       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34] <==
	* W0911 12:12:45.267805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 12:12:45.267841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 12:12:45.269853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 12:12:45.269903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 12:12:46.126632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 12:12:46.126749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 12:12:46.225087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 12:12:46.225145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0911 12:12:46.250503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 12:12:46.250567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0911 12:12:46.395717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 12:12:46.395782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 12:12:46.402979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 12:12:46.403145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0911 12:12:46.420524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 12:12:46.420668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 12:12:46.544212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 12:12:46.544270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 12:12:46.551823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 12:12:46.551903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 12:12:46.575998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 12:12:46.576240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0911 12:12:46.787380       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 12:12:46.787565       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0911 12:12:50.052409       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:07:23 UTC, ends at Mon 2023-09-11 12:22:08 UTC. --
	Sep 11 12:19:29 embed-certs-235462 kubelet[3905]: E0911 12:19:29.070939    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:19:44 embed-certs-235462 kubelet[3905]: E0911 12:19:44.070054    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:19:49 embed-certs-235462 kubelet[3905]: E0911 12:19:49.152270    3905 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:19:49 embed-certs-235462 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:19:49 embed-certs-235462 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:19:49 embed-certs-235462 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:19:56 embed-certs-235462 kubelet[3905]: E0911 12:19:56.070501    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:20:07 embed-certs-235462 kubelet[3905]: E0911 12:20:07.072957    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:20:19 embed-certs-235462 kubelet[3905]: E0911 12:20:19.070291    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:20:33 embed-certs-235462 kubelet[3905]: E0911 12:20:33.070742    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:20:44 embed-certs-235462 kubelet[3905]: E0911 12:20:44.070286    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:20:49 embed-certs-235462 kubelet[3905]: E0911 12:20:49.152860    3905 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:20:49 embed-certs-235462 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:20:49 embed-certs-235462 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:20:49 embed-certs-235462 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:20:59 embed-certs-235462 kubelet[3905]: E0911 12:20:59.070632    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:21:12 embed-certs-235462 kubelet[3905]: E0911 12:21:12.070363    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:21:23 embed-certs-235462 kubelet[3905]: E0911 12:21:23.070800    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:21:35 embed-certs-235462 kubelet[3905]: E0911 12:21:35.070087    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:21:47 embed-certs-235462 kubelet[3905]: E0911 12:21:47.070694    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:21:49 embed-certs-235462 kubelet[3905]: E0911 12:21:49.154100    3905 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:21:49 embed-certs-235462 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:21:49 embed-certs-235462 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:21:49 embed-certs-235462 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:21:58 embed-certs-235462 kubelet[3905]: E0911 12:21:58.070561    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	
	* 
	* ==> storage-provisioner [e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429] <==
	* I0911 12:13:05.280736       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:13:05.302612       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:13:05.302848       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:13:05.346181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:13:05.348155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-235462_9e61034f-8d48-4595-9ef5-6f168482312d!
	I0911 12:13:05.350633       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c314536-3286-4153-950f-1093a98f838f", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-235462_9e61034f-8d48-4595-9ef5-6f168482312d became leader
	I0911 12:13:05.449607       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-235462_9e61034f-8d48-4595-9ef5-6f168482312d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-235462 -n embed-certs-235462
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-235462 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qbrf2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-235462 describe pod metrics-server-57f55c9bc5-qbrf2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-235462 describe pod metrics-server-57f55c9bc5-qbrf2: exit status 1 (73.406087ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qbrf2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-235462 describe pod metrics-server-57f55c9bc5-qbrf2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (543.40s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.34s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0911 12:13:47.569711 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 12:14:15.052946 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:22:12.312269842 +0000 UTC m=+5137.624894729
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-484027 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-484027 logs -n 25: (1.671807486s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:57 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559775 ssh                                | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559775 -- sudo                         | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559775                                 | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-352076             | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:59 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-235462            | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642215        | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:01 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-226537 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	|         | disable-driver-mounts-226537                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:02 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-352076                  | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484027  | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-235462                 | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642215             | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484027       | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC | 11 Sep 23 12:13 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:04:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:04:58.034724 2255814 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:04:58.034920 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.034929 2255814 out.go:309] Setting ErrFile to fd 2...
	I0911 12:04:58.034933 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.035102 2255814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:04:58.035709 2255814 out.go:303] Setting JSON to false
	I0911 12:04:58.036651 2255814 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236849,"bootTime":1694197049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:04:58.036727 2255814 start.go:138] virtualization: kvm guest
	I0911 12:04:58.039239 2255814 out.go:177] * [default-k8s-diff-port-484027] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:04:58.041110 2255814 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:04:58.041181 2255814 notify.go:220] Checking for updates...
	I0911 12:04:58.042795 2255814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:04:58.044550 2255814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:04:58.046047 2255814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:04:58.047718 2255814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:04:58.049343 2255814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:04:58.051545 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:04:58.051956 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.052047 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.068212 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0911 12:04:58.068649 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.069289 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.069318 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.069763 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.069987 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.070276 2255814 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:04:58.070629 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.070670 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.085941 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0911 12:04:58.086461 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.086966 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.086995 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.087337 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.087522 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.126206 2255814 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 12:04:58.127558 2255814 start.go:298] selected driver: kvm2
	I0911 12:04:58.127571 2255814 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.127716 2255814 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:04:58.128430 2255814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.128555 2255814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:04:58.144660 2255814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:04:58.145091 2255814 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 12:04:58.145145 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:04:58.145159 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:04:58.145176 2255814 start_flags.go:321] config:
	{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.145377 2255814 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.147634 2255814 out.go:177] * Starting control plane node default-k8s-diff-port-484027 in cluster default-k8s-diff-port-484027
	I0911 12:04:56.741109 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:04:58.149438 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:04:58.149510 2255814 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:04:58.149543 2255814 cache.go:57] Caching tarball of preloaded images
	I0911 12:04:58.149650 2255814 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:04:58.149664 2255814 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:04:58.149825 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:04:58.150070 2255814 start.go:365] acquiring machines lock for default-k8s-diff-port-484027: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:04:59.813165 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:05.893188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:08.965171 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:15.045168 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:18.117188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:24.197148 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:27.269089 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:33.349151 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:36.421191 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:42.501129 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:45.573209 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:51.653159 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:54.725153 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:00.805201 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:03.877105 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:09.957136 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:13.029119 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:19.109157 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:22.181096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:28.261156 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:31.333179 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:37.413187 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:40.485239 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:46.565193 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:49.637182 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:55.717194 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:58.789181 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:04.869137 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:07.941096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:10.946790 2255187 start.go:369] acquired machines lock for "embed-certs-235462" in 4m28.227506413s
	I0911 12:07:10.946859 2255187 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:10.946884 2255187 fix.go:54] fixHost starting: 
	I0911 12:07:10.947279 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:10.947318 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:10.963823 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0911 12:07:10.964352 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:10.965050 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:07:10.965086 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:10.965556 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:10.965804 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:10.965995 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:07:10.967759 2255187 fix.go:102] recreateIfNeeded on embed-certs-235462: state=Stopped err=<nil>
	I0911 12:07:10.967790 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	W0911 12:07:10.968000 2255187 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:10.970103 2255187 out.go:177] * Restarting existing kvm2 VM for "embed-certs-235462" ...
	I0911 12:07:10.971879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Start
	I0911 12:07:10.972130 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring networks are active...
	I0911 12:07:10.973084 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network default is active
	I0911 12:07:10.973424 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network mk-embed-certs-235462 is active
	I0911 12:07:10.973888 2255187 main.go:141] libmachine: (embed-certs-235462) Getting domain xml...
	I0911 12:07:10.974726 2255187 main.go:141] libmachine: (embed-certs-235462) Creating domain...
	I0911 12:07:12.246736 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting to get IP...
	I0911 12:07:12.247648 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.248019 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.248152 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.248016 2256167 retry.go:31] will retry after 245.040457ms: waiting for machine to come up
	I0911 12:07:12.494788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.495311 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.495345 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.495247 2256167 retry.go:31] will retry after 312.634812ms: waiting for machine to come up
	I0911 12:07:10.943345 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:10.943403 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:07:10.946565 2255048 machine.go:91] provisioned docker machine in 4m37.405921901s
	I0911 12:07:10.946641 2255048 fix.go:56] fixHost completed within 4m37.430192233s
	I0911 12:07:10.946648 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 4m37.430236677s
	W0911 12:07:10.946673 2255048 start.go:672] error starting host: provision: host is not running
	W0911 12:07:10.946819 2255048 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0911 12:07:10.946833 2255048 start.go:687] Will try again in 5 seconds ...
	I0911 12:07:12.810038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.810461 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.810496 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.810398 2256167 retry.go:31] will retry after 478.036066ms: waiting for machine to come up
	I0911 12:07:13.290252 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.290701 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.290731 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.290646 2256167 retry.go:31] will retry after 576.124591ms: waiting for machine to come up
	I0911 12:07:13.868555 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.868978 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.869004 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.868931 2256167 retry.go:31] will retry after 487.107859ms: waiting for machine to come up
	I0911 12:07:14.357765 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:14.358240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:14.358315 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:14.358173 2256167 retry.go:31] will retry after 903.857312ms: waiting for machine to come up
	I0911 12:07:15.263350 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:15.263852 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:15.263908 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:15.263777 2256167 retry.go:31] will retry after 830.555039ms: waiting for machine to come up
	I0911 12:07:16.096337 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:16.096743 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:16.096774 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:16.096696 2256167 retry.go:31] will retry after 1.307188723s: waiting for machine to come up
	I0911 12:07:17.406129 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:17.406558 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:17.406584 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:17.406512 2256167 retry.go:31] will retry after 1.681887732s: waiting for machine to come up
	I0911 12:07:15.947503 2255048 start.go:365] acquiring machines lock for no-preload-352076: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:07:19.090590 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:19.091013 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:19.091038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:19.090965 2256167 retry.go:31] will retry after 2.013298988s: waiting for machine to come up
	I0911 12:07:21.105851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:21.106384 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:21.106418 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:21.106319 2256167 retry.go:31] will retry after 2.714578164s: waiting for machine to come up
	I0911 12:07:23.823181 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:23.823687 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:23.823772 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:23.823623 2256167 retry.go:31] will retry after 2.321779277s: waiting for machine to come up
	I0911 12:07:26.147527 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:26.147956 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:26.147986 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:26.147896 2256167 retry.go:31] will retry after 4.307300197s: waiting for machine to come up
	I0911 12:07:31.786165 2255304 start.go:369] acquired machines lock for "old-k8s-version-642215" in 4m38.564304718s
	I0911 12:07:31.786239 2255304 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:31.786261 2255304 fix.go:54] fixHost starting: 
	I0911 12:07:31.786754 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:31.786809 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:31.806853 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0911 12:07:31.807320 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:31.807871 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:07:31.807906 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:31.808284 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:31.808473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:31.808622 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:07:31.810311 2255304 fix.go:102] recreateIfNeeded on old-k8s-version-642215: state=Stopped err=<nil>
	I0911 12:07:31.810345 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	W0911 12:07:31.810524 2255304 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:31.813302 2255304 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642215" ...
	I0911 12:07:30.458075 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.458554 2255187 main.go:141] libmachine: (embed-certs-235462) Found IP for machine: 192.168.50.96
	I0911 12:07:30.458579 2255187 main.go:141] libmachine: (embed-certs-235462) Reserving static IP address...
	I0911 12:07:30.458593 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has current primary IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.459036 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.459066 2255187 main.go:141] libmachine: (embed-certs-235462) Reserved static IP address: 192.168.50.96
	I0911 12:07:30.459088 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | skip adding static IP to network mk-embed-certs-235462 - found existing host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"}
	I0911 12:07:30.459104 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Getting to WaitForSSH function...
	I0911 12:07:30.459117 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting for SSH to be available...
	I0911 12:07:30.461594 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.461938 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.461979 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.462087 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH client type: external
	I0911 12:07:30.462109 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa (-rw-------)
	I0911 12:07:30.462146 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:30.462165 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | About to run SSH command:
	I0911 12:07:30.462200 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | exit 0
	I0911 12:07:30.556976 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:30.557370 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetConfigRaw
	I0911 12:07:30.558054 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.560898 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561254 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.561292 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561638 2255187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/config.json ...
	I0911 12:07:30.561863 2255187 machine.go:88] provisioning docker machine ...
	I0911 12:07:30.561885 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:30.562128 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562296 2255187 buildroot.go:166] provisioning hostname "embed-certs-235462"
	I0911 12:07:30.562315 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562497 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.565095 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565484 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.565519 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565682 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.565852 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566021 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566126 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.566273 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.566796 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.566814 2255187 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-235462 && echo "embed-certs-235462" | sudo tee /etc/hostname
	I0911 12:07:30.706262 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-235462
	
	I0911 12:07:30.706294 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.709499 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.709822 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.709862 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.710067 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.710331 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710598 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710762 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.710986 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.711479 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.711503 2255187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235462/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:30.850084 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:30.850120 2255187 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:30.850141 2255187 buildroot.go:174] setting up certificates
	I0911 12:07:30.850155 2255187 provision.go:83] configureAuth start
	I0911 12:07:30.850168 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.850494 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.853326 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853650 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.853680 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853864 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.856233 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856574 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.856639 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856755 2255187 provision.go:138] copyHostCerts
	I0911 12:07:30.856844 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:30.856859 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:30.856933 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:30.857039 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:30.857050 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:30.857078 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:30.857143 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:30.857150 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:30.857170 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:30.857217 2255187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235462 san=[192.168.50.96 192.168.50.96 localhost 127.0.0.1 minikube embed-certs-235462]
	I0911 12:07:30.996533 2255187 provision.go:172] copyRemoteCerts
	I0911 12:07:30.996607 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:30.996643 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.999950 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.000370 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000514 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.000787 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.000978 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.001133 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.095524 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:31.121456 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:31.145813 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0911 12:07:31.171621 2255187 provision.go:86] duration metric: configureAuth took 321.448095ms
	I0911 12:07:31.171657 2255187 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:31.171880 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:07:31.171975 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.175276 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.175783 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.175819 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.176082 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.176356 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176535 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176724 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.177014 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.177500 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.177521 2255187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:31.514064 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:31.514090 2255187 machine.go:91] provisioned docker machine in 952.213137ms
	I0911 12:07:31.514101 2255187 start.go:300] post-start starting for "embed-certs-235462" (driver="kvm2")
	I0911 12:07:31.514135 2255187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:31.514188 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.514651 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:31.514705 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.517108 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517563 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.517599 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517819 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.518053 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.518243 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.518426 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.612293 2255187 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:31.616991 2255187 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:31.617022 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:31.617143 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:31.617263 2255187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:31.617393 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:31.627725 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:31.652196 2255187 start.go:303] post-start completed in 138.067305ms
	I0911 12:07:31.652232 2255187 fix.go:56] fixHost completed within 20.705348144s
	I0911 12:07:31.652264 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.655234 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655598 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.655633 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655810 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.656000 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656236 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656373 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.656547 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.657061 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.657078 2255187 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:31.785981 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434051.730508911
	
	I0911 12:07:31.786019 2255187 fix.go:206] guest clock: 1694434051.730508911
	I0911 12:07:31.786029 2255187 fix.go:219] Guest: 2023-09-11 12:07:31.730508911 +0000 UTC Remote: 2023-09-11 12:07:31.65223725 +0000 UTC m=+289.079171252 (delta=78.271661ms)
	I0911 12:07:31.786076 2255187 fix.go:190] guest clock delta is within tolerance: 78.271661ms
	I0911 12:07:31.786082 2255187 start.go:83] releasing machines lock for "embed-certs-235462", held for 20.839248295s
	I0911 12:07:31.786115 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.786440 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:31.789427 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.789809 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.789844 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.790024 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790717 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790954 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.791062 2255187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:31.791130 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.791177 2255187 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:31.791208 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.793991 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794359 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794393 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794414 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794669 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.794879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.794871 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794913 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.795104 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.795112 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795289 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.795291 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.795468 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795585 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.910483 2255187 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:31.916739 2255187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:32.059621 2255187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:32.066857 2255187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:32.066955 2255187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:32.084365 2255187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:32.084401 2255187 start.go:466] detecting cgroup driver to use...
	I0911 12:07:32.084518 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:32.098782 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:32.111344 2255187 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:32.111421 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:32.124323 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:32.137910 2255187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:32.244478 2255187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:32.374160 2255187 docker.go:212] disabling docker service ...
	I0911 12:07:32.374262 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:32.387762 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:32.401120 2255187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:32.522150 2255187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:31.815672 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Start
	I0911 12:07:31.815900 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring networks are active...
	I0911 12:07:31.816771 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network default is active
	I0911 12:07:31.817161 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network mk-old-k8s-version-642215 is active
	I0911 12:07:31.817559 2255304 main.go:141] libmachine: (old-k8s-version-642215) Getting domain xml...
	I0911 12:07:31.818275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Creating domain...
	I0911 12:07:32.639647 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:32.658106 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:32.677573 2255187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:07:32.677658 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.687407 2255187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:32.687499 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.697706 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.707515 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.718090 2255187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:32.728668 2255187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:32.737652 2255187 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:32.737732 2255187 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:32.751510 2255187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:32.760774 2255187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:32.881718 2255187 ssh_runner.go:195] Run: sudo systemctl restart crio
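	The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 image and switch it to the cgroupfs cgroup manager (with conmon placed in the "pod" cgroup) before crio is restarted. A rough, local Go equivalent of those in-place edits (same file path and values as in the log; not minikube's actual implementation) is:

package main

import (
	"os"
	"regexp"
)

const confPath = "/etc/crio/crio.conf.d/02-crio.conf"

func main() {
	data, err := os.ReadFile(confPath)
	if err != nil {
		panic(err)
	}
	// pause_image must match what kubeadm/kubelet expect for this release.
	data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	// Drop any stale conmon_cgroup line, then set the cgroup manager and
	// re-add conmon_cgroup right after it, as the sed pipeline does.
	data = regexp.MustCompile(`(?m)^conmon_cgroup = .*\n`).ReplaceAll(data, nil)
	data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(data, []byte("cgroup_manager = \"cgroupfs\"\nconmon_cgroup = \"pod\""))
	if err := os.WriteFile(confPath, data, 0o644); err != nil {
		panic(err)
	}
	// A `systemctl daemon-reload && systemctl restart crio` follows in the log.
}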
	I0911 12:07:33.064736 2255187 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:33.064859 2255187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:33.071112 2255187 start.go:534] Will wait 60s for crictl version
	I0911 12:07:33.071195 2255187 ssh_runner.go:195] Run: which crictl
	I0911 12:07:33.075202 2255187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:33.111795 2255187 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:33.111898 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.162455 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.224538 2255187 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:07:33.226156 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:33.229640 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230164 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:33.230202 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230434 2255187 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:33.235232 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
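	The bash one-liner above is minikube's idempotent way of (re)writing the host.minikube.internal entry: strip any previous line ending in that name, append the fresh mapping, and copy the temp file back over /etc/hosts. A small Go sketch of the same read-filter-append update (the upsertHost helper is illustrative, not project code):

package main

import (
	"os"
	"strings"
)

// upsertHost rewrites the hosts file so exactly one line maps name to ip,
// mirroring the grep -v / echo / tee pattern from the log above.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(string(data), "\n") {
		if line != "" && !strings.HasSuffix(line, "\t"+name) {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := upsertHost("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
		panic(err)
	}
}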
	I0911 12:07:33.248016 2255187 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:07:33.248094 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:33.290506 2255187 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:07:33.290594 2255187 ssh_runner.go:195] Run: which lz4
	I0911 12:07:33.294802 2255187 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:07:33.299115 2255187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:33.299169 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:07:35.241115 2255187 crio.go:444] Took 1.946355 seconds to copy over tarball
	I0911 12:07:35.241211 2255187 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:07:33.131519 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting to get IP...
	I0911 12:07:33.132551 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.133144 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.133255 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.133123 2256281 retry.go:31] will retry after 206.885556ms: waiting for machine to come up
	I0911 12:07:33.341966 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.342472 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.342495 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.342420 2256281 retry.go:31] will retry after 235.74047ms: waiting for machine to come up
	I0911 12:07:33.580161 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.580683 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.580720 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.580644 2256281 retry.go:31] will retry after 407.752379ms: waiting for machine to come up
	I0911 12:07:33.990505 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.991033 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.991099 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.991019 2256281 retry.go:31] will retry after 579.085044ms: waiting for machine to come up
	I0911 12:07:34.571958 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:34.572419 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:34.572451 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:34.572371 2256281 retry.go:31] will retry after 584.464544ms: waiting for machine to come up
	I0911 12:07:35.158152 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.158644 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.158677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.158579 2256281 retry.go:31] will retry after 750.2868ms: waiting for machine to come up
	I0911 12:07:35.910364 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.910949 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.910983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.910887 2256281 retry.go:31] will retry after 981.989906ms: waiting for machine to come up
	I0911 12:07:36.894691 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:36.895196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:36.895233 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:36.895151 2256281 retry.go:31] will retry after 1.082443232s: waiting for machine to come up
	I0911 12:07:37.979265 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:37.979773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:37.979802 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:37.979699 2256281 retry.go:31] will retry after 1.429811083s: waiting for machine to come up
	I0911 12:07:38.272328 2255187 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.031081597s)
	I0911 12:07:38.272378 2255187 crio.go:451] Took 3.031222 seconds to extract the tarball
	I0911 12:07:38.272392 2255187 ssh_runner.go:146] rm: /preloaded.tar.lz4
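	Since no preloaded images were found in the runtime, the ~457 MB image tarball is scp'd to the VM and unpacked into /var with lz4-aware tar, after which crictl reports all images as present and the explicit image-loading step is skipped. A compact sketch of that extract-and-clean-up step (run locally here; minikube drives the same commands over SSH):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarball := "/preloaded.tar.lz4" // uploaded by scp in the log above

	// tar -I lz4 decompresses with lz4 before unpacking; -C /var lands the
	// container image store under /var (lib/containers for CRI-O).
	if out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput(); err != nil {
		panic(fmt.Sprintf("extract failed: %v: %s", err, out))
	}
	// Remove the tarball, then `crictl images --output json` confirms the preload.
	if err := exec.Command("sudo", "rm", "-f", tarball).Run(); err != nil {
		panic(err)
	}
}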
	I0911 12:07:38.314797 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:38.363925 2255187 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:07:38.363956 2255187 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:07:38.364034 2255187 ssh_runner.go:195] Run: crio config
	I0911 12:07:38.433884 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:38.433915 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:38.433941 2255187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:07:38.433969 2255187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235462 NodeName:embed-certs-235462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:07:38.434156 2255187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235462"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:07:38.434250 2255187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-235462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
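	The generated kubelet drop-in above relies on a systemd convention: an empty ExecStart= line first clears the command inherited from the base kubelet.service, so the ExecStart that follows fully replaces it with the versioned binary, the CRI-O socket and the node IP. As the next lines show, the file is then shipped to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf. A hedged sketch of writing such a drop-in locally (contents copied from the log; the writer itself is illustrative):

package main

import "os"

// Drop-in contents as printed in the log: the empty ExecStart= clears the
// base unit's command before the real one is set.
const dropIn = `[Unit]
Wants=crio.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-235462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96

[Install]
`

func main() {
	if err := os.MkdirAll("/etc/systemd/system/kubelet.service.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/systemd/system/kubelet.service.d/10-kubeadm.conf", []byte(dropIn), 0o644); err != nil {
		panic(err)
	}
	// `systemctl daemon-reload` must follow for systemd to pick this up.
}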
	I0911 12:07:38.434339 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:07:38.447171 2255187 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:07:38.447273 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:07:38.459426 2255187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:07:38.478081 2255187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:07:38.495571 2255187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0911 12:07:38.514602 2255187 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I0911 12:07:38.518616 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:38.531178 2255187 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462 for IP: 192.168.50.96
	I0911 12:07:38.531246 2255187 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:07:38.531410 2255187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:07:38.531471 2255187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:07:38.531565 2255187 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/client.key
	I0911 12:07:38.531650 2255187 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key.8e4e34e1
	I0911 12:07:38.531705 2255187 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key
	I0911 12:07:38.531860 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:07:38.531918 2255187 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:07:38.531933 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:07:38.531976 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:07:38.532020 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:07:38.532071 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:07:38.532140 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:38.532870 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:07:38.558426 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0911 12:07:38.582526 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:07:38.606798 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:07:38.630691 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:07:38.655580 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:07:38.682355 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:07:38.707701 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:07:38.732346 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:07:38.757688 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:07:38.783458 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:07:38.808481 2255187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:07:38.825822 2255187 ssh_runner.go:195] Run: openssl version
	I0911 12:07:38.831897 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:07:38.842170 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847385 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847467 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.853456 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:07:38.864049 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:07:38.874236 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879391 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879463 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.885352 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:07:38.895225 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:07:38.905599 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910660 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910748 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.916920 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
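	The openssl/ln pairs above are how the copied CA certificates are made visible to the system trust store: each PEM under /usr/share/ca-certificates is hashed with `openssl x509 -hash` and a <hash>.0 symlink is created in /etc/ssl/certs (the b5213941.0 and 51391683.0 names in the log are exactly those subject hashes). A small Go sketch of the same rehash step for a single certificate (paths taken from the log):

package main

import (
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	pem := "/usr/share/ca-certificates/minikubeCA.pem"

	// openssl prints the subject-name hash used by OpenSSL's lookup-by-directory.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		panic(err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941"

	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	if err := os.Symlink(pem, link); err != nil {
		panic(err)
	}
}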
	I0911 12:07:38.927096 2255187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:07:38.932313 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:07:38.939081 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:07:38.946028 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:07:38.952644 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:07:38.959391 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:07:38.965871 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 12:07:38.972698 2255187 kubeadm.go:404] StartCluster: {Name:embed-certs-235462 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:07:38.972838 2255187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:07:38.972906 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:39.006683 2255187 cri.go:89] found id: ""
	I0911 12:07:39.006780 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:07:39.017143 2255187 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:07:39.017173 2255187 kubeadm.go:636] restartCluster start
	I0911 12:07:39.017256 2255187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:07:39.029483 2255187 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.031111 2255187 kubeconfig.go:92] found "embed-certs-235462" server: "https://192.168.50.96:8443"
	I0911 12:07:39.034708 2255187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:07:39.046851 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.046919 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.058732 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.058756 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.058816 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.070011 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.570811 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.570945 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.583538 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.071137 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.071264 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.083997 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.570532 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.570646 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.583202 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.070241 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.070369 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.082992 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.570284 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.570420 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.582669 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.070231 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.070341 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.086964 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.570487 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.570592 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.582618 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.411715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:39.412168 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:39.412203 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:39.412129 2256281 retry.go:31] will retry after 2.048771803s: waiting for machine to come up
	I0911 12:07:41.463672 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:41.464124 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:41.464160 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:41.464061 2256281 retry.go:31] will retry after 2.459765131s: waiting for machine to come up
	I0911 12:07:43.071070 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.071249 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.087309 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.570993 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.571105 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.586884 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.070402 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.070525 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.082541 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.571170 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.571303 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.583295 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.070902 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.071002 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.087666 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.570274 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.570400 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.587352 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.070596 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.070729 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.082939 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.570445 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.570559 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.582782 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.070351 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.070485 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.082518 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.571060 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.571155 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.583891 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.926561 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:43.926941 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:43.926983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:43.926918 2256281 retry.go:31] will retry after 2.467825155s: waiting for machine to come up
	I0911 12:07:46.396258 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:46.396703 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:46.396736 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:46.396622 2256281 retry.go:31] will retry after 3.885293775s: waiting for machine to come up
	I0911 12:07:48.070904 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.070994 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.083706 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:48.570268 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.570404 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.582255 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:49.047880 2255187 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
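	The long run of "Checking apiserver status ..." lines above is a fixed-interval poll: roughly every 500ms minikube runs `sudo pgrep -xnf kube-apiserver.*minikube.*` over SSH and treats a non-zero exit as "no apiserver yet", until a surrounding context times out and the restart path concludes the cluster needs to be reconfigured. A stripped-down sketch of that wait loop (run locally here; interval and pgrep pattern copied from the log):

package main

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer polls pgrep until a kube-apiserver process shows up or
// the context deadline expires, mirroring the loop in the log above.
func waitForAPIServer(ctx context.Context) error {
	ticker := time.NewTicker(500 * time.Millisecond)
	defer ticker.Stop()
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil {
			fmt.Printf("apiserver pid: %s", out)
			return nil
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("apiserver never appeared: %w", ctx.Err())
		case <-ticker.C:
		}
	}
}

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()
	if err := waitForAPIServer(ctx); err != nil {
		fmt.Println(err) // e.g. "context deadline exceeded", as in the log
	}
}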
	I0911 12:07:49.047929 2255187 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:07:49.047951 2255187 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:07:49.048052 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:49.081907 2255187 cri.go:89] found id: ""
	I0911 12:07:49.082024 2255187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:07:49.099563 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:07:49.109373 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:07:49.109450 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119162 2255187 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119210 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.251091 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.995928 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.192421 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.288496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
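	Because the kubeconfig and manifest files were missing, the restart path rebuilds the control plane piecewise with `kubeadm init phase` subcommands rather than a full `kubeadm init`: certs, kubeconfig files, kubelet start, static control-plane manifests, then the local etcd manifest, all against the same /var/tmp/minikube/kubeadm.yaml. A sketch of driving that sequence (PATH prefix and config path copied from the log; run here without the SSH hop):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}

	for _, p := range phases {
		// Same invocation shape as the log: kubeadm from the versioned
		// binaries directory, one phase at a time, sharing one config file.
		cmd := exec.Command("/bin/bash", "-c",
			fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase %s --config %s`, p, cfg))
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(fmt.Sprintf("phase %q failed: %v", p, err))
		}
	}
}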
	I0911 12:07:50.365849 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:07:50.365943 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.383262 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.901757 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.401967 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.901613 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:52.402067 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.285991 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:50.286515 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:50.286547 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:50.286433 2256281 retry.go:31] will retry after 3.948880306s: waiting for machine to come up
	I0911 12:07:55.614569 2255814 start.go:369] acquired machines lock for "default-k8s-diff-port-484027" in 2m57.464444695s
	I0911 12:07:55.614642 2255814 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:55.614662 2255814 fix.go:54] fixHost starting: 
	I0911 12:07:55.615164 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:55.615208 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:55.635996 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0911 12:07:55.636556 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:55.637268 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:07:55.637295 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:55.637758 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:55.638000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:07:55.638191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:07:55.640059 2255814 fix.go:102] recreateIfNeeded on default-k8s-diff-port-484027: state=Stopped err=<nil>
	I0911 12:07:55.640086 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	W0911 12:07:55.640254 2255814 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:55.643100 2255814 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-484027" ...
	I0911 12:07:54.236661 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237200 2255304 main.go:141] libmachine: (old-k8s-version-642215) Found IP for machine: 192.168.61.58
	I0911 12:07:54.237226 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserving static IP address...
	I0911 12:07:54.237241 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has current primary IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237676 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.237717 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | skip adding static IP to network mk-old-k8s-version-642215 - found existing host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"}
	I0911 12:07:54.237736 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserved static IP address: 192.168.61.58
	I0911 12:07:54.237756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting for SSH to be available...
	I0911 12:07:54.237773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Getting to WaitForSSH function...
	I0911 12:07:54.240007 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240469 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.240521 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240610 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH client type: external
	I0911 12:07:54.240642 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa (-rw-------)
	I0911 12:07:54.240679 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:54.240700 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | About to run SSH command:
	I0911 12:07:54.240715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | exit 0
	I0911 12:07:54.337416 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | SSH cmd err, output: <nil>: 
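	WaitForSSH above shells out to the system ssh client with a deliberately forgiving option set (no host-key checking, key-only auth, short connect timeout) and simply runs `exit 0`; the `SSH cmd err, output: <nil>` line is what signals the VM is reachable. A sketch of that reachability probe with the core flags (key path and address copied from the log; a few options such as ControlMaster and ServerAliveInterval are omitted here):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no", // fresh VM, host key is unknown
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes", // only offer the machine's own key
		"-i", "/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa",
		"-p", "22",
		"docker@192.168.61.58",
		"exit 0", // cheapest possible liveness check
	}
	if err := exec.Command("/usr/bin/ssh", args...).Run(); err != nil {
		fmt.Println("ssh not ready yet:", err)
		return
	}
	fmt.Println("ssh is up")
}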
	I0911 12:07:54.337857 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetConfigRaw
	I0911 12:07:54.338666 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.341640 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.341973 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.342025 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.342296 2255304 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/config.json ...
	I0911 12:07:54.342549 2255304 machine.go:88] provisioning docker machine ...
	I0911 12:07:54.342573 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:54.342809 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.342965 2255304 buildroot.go:166] provisioning hostname "old-k8s-version-642215"
	I0911 12:07:54.342986 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.343133 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.345466 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.345848 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.345881 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.346024 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.346214 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346491 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.346713 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.347165 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.347184 2255304 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642215 && echo "old-k8s-version-642215" | sudo tee /etc/hostname
	I0911 12:07:54.487005 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642215
	
	I0911 12:07:54.487058 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.489843 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490146 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.490175 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490378 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.490603 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490774 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490931 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.491146 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.491586 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.491612 2255304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642215/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:54.631441 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:54.631474 2255304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:54.631500 2255304 buildroot.go:174] setting up certificates
	I0911 12:07:54.631513 2255304 provision.go:83] configureAuth start
	I0911 12:07:54.631525 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.631988 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.634992 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635411 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.635448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635700 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.638219 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638608 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.638646 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638788 2255304 provision.go:138] copyHostCerts
	I0911 12:07:54.638870 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:54.638881 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:54.638957 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:54.639087 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:54.639099 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:54.639128 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:54.639278 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:54.639293 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:54.639322 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:54.639405 2255304 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642215 san=[192.168.61.58 192.168.61.58 localhost 127.0.0.1 minikube old-k8s-version-642215]
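
The provision step above issues a server certificate for the guest, signed by the minikube CA and carrying the SANs listed in the log (the VM IP, 127.0.0.1, localhost, and the hostnames). Below is a minimal sketch of that idea using Go's crypto/x509; the file names ca.pem/ca-key.pem/server.pem are placeholders, and it assumes the CA key is an RSA key in PKCS#1 PEM form, which may not match minikube's actual implementation.

// servercert_sketch.go: sign a server certificate with an existing CA,
// reusing the SANs from the provision log line above. Illustrative only.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func mustPEMBlock(path string) *pem.Block {
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM data in " + path)
	}
	return block
}

func main() {
	caCert, err := x509.ParseCertificate(mustPEMBlock("ca.pem").Bytes)
	if err != nil {
		panic(err)
	}
	caKey, err := x509.ParsePKCS1PrivateKey(mustPEMBlock("ca-key.pem").Bytes) // assumes an RSA PKCS#1 key
	if err != nil {
		panic(err)
	}

	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.old-k8s-version-642215"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		DNSNames:    []string{"localhost", "minikube", "old-k8s-version-642215"},
		IPAddresses: []net.IP{net.ParseIP("192.168.61.58"), net.ParseIP("127.0.0.1")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
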
	I0911 12:07:54.792963 2255304 provision.go:172] copyRemoteCerts
	I0911 12:07:54.793027 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:54.793056 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.796196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796555 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.796592 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796884 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.797124 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.797410 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.797620 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:54.895690 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 12:07:54.923392 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:54.951276 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:54.979345 2255304 provision.go:86] duration metric: configureAuth took 347.814948ms
	I0911 12:07:54.979383 2255304 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:54.979690 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:07:54.979805 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.982955 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983405 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.983448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983618 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.983822 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984020 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984190 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.984377 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.984924 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.984948 2255304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:55.330958 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:55.330995 2255304 machine.go:91] provisioned docker machine in 988.429681ms
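
The SSH command above writes a small drop-in with the runtime options and restarts CRI-O. The following sketch produces the same end state, run directly on the guest instead of over SSH; the file path, option string, and restart call mirror the logged command, everything else is illustrative.

// crio_options_sketch.go: write the CRI-O options drop-in and restart the
// service, as the provisioning command above does over SSH.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	opts := "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n"
	if err := os.MkdirAll("/etc/sysconfig", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/sysconfig/crio.minikube", []byte(opts), 0o644); err != nil {
		panic(err)
	}
	out, err := exec.Command("systemctl", "restart", "crio").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		panic(err)
	}
}
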
	I0911 12:07:55.331008 2255304 start.go:300] post-start starting for "old-k8s-version-642215" (driver="kvm2")
	I0911 12:07:55.331021 2255304 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:55.331049 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.331490 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:55.331536 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.334936 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335425 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.335467 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335645 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.335902 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.336075 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.336290 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.439126 2255304 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:55.445330 2255304 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:55.445370 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:55.445453 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:55.445564 2255304 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:55.445692 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:55.455235 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:55.480979 2255304 start.go:303] post-start completed in 149.950869ms
	I0911 12:07:55.481014 2255304 fix.go:56] fixHost completed within 23.694753941s
	I0911 12:07:55.481046 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.484222 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484612 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.484647 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484879 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.485159 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485352 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485527 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.485696 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:55.486109 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:55.486122 2255304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:55.614312 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434075.554093051
	
	I0911 12:07:55.614344 2255304 fix.go:206] guest clock: 1694434075.554093051
	I0911 12:07:55.614355 2255304 fix.go:219] Guest: 2023-09-11 12:07:55.554093051 +0000 UTC Remote: 2023-09-11 12:07:55.481020512 +0000 UTC m=+302.412352865 (delta=73.072539ms)
	I0911 12:07:55.614409 2255304 fix.go:190] guest clock delta is within tolerance: 73.072539ms
	I0911 12:07:55.614423 2255304 start.go:83] releasing machines lock for "old-k8s-version-642215", held for 23.828210342s
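
The fix step above compares the guest clock (read via date over SSH) against the host-side reference time and accepts the drift when it is within a tolerance. The small sketch below reproduces the 73.072539ms delta reported in the log; the tolerance value is an assumption for illustration, not minikube's actual constant.

// clockdelta_sketch.go: recompute the guest/host clock delta from the values
// logged above and check it against an assumed tolerance.
package main

import (
	"fmt"
	"time"
)

func main() {
	guest := time.Unix(1694434075, 554093051)                         // guest clock from "date +%s.%N"
	remote := time.Date(2023, 9, 11, 12, 7, 55, 481020512, time.UTC)  // host-side reference time
	delta := guest.Sub(remote)
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // assumed threshold for illustration
	fmt.Printf("delta=%v, within tolerance=%v: %v\n", delta, tolerance, delta <= tolerance)
}
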
	I0911 12:07:55.614465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.614816 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:55.617993 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618444 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.618489 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619611 2255304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:55.619674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.619732 2255304 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:55.619767 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.622428 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622846 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.622873 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622894 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623012 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623191 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623279 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.623302 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623399 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623543 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.623615 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623747 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623891 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.742462 2255304 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:55.748982 2255304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:55.906639 2255304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:55.914088 2255304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:55.914183 2255304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:55.938200 2255304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:55.938240 2255304 start.go:466] detecting cgroup driver to use...
	I0911 12:07:55.938333 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:55.965549 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:55.986227 2255304 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:55.986308 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:56.003370 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:56.025702 2255304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:56.158835 2255304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:56.311687 2255304 docker.go:212] disabling docker service ...
	I0911 12:07:56.311770 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:56.337492 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:56.355858 2255304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:56.486823 2255304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:56.617414 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:56.634057 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:56.658242 2255304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0911 12:07:56.658370 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.670146 2255304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:56.670252 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.681790 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.695832 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.707434 2255304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:56.718631 2255304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:56.729355 2255304 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:56.729436 2255304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:56.744591 2255304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:56.755374 2255304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:56.906693 2255304 ssh_runner.go:195] Run: sudo systemctl restart crio
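
The sequence above points crictl at the CRI-O socket, switches the pause image and cgroup manager with sed edits, and then restarts the runtime. Written out directly, the configuration it converges on looks roughly like the sketch below; the exact section layout of 02-crio.conf is reconstructed from the sed commands, not copied from the VM.

// crio_config_sketch.go: the end state the crictl.yaml write and sed edits
// above aim for; section names and layout are reconstructed, so treat this as
// an approximation of the real files.
package main

import "os"

func main() {
	crictl := "runtime-endpoint: unix:///var/run/crio/crio.sock\n"
	crioDropIn := `[crio.image]
pause_image = "registry.k8s.io/pause:3.1"

[crio.runtime]
cgroup_manager = "cgroupfs"
conmon_cgroup = "pod"
`
	if err := os.WriteFile("/etc/crictl.yaml", []byte(crictl), 0o644); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/crio/crio.conf.d/02-crio.conf", []byte(crioDropIn), 0o644); err != nil {
		panic(err)
	}
}
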
	I0911 12:07:57.131296 2255304 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:57.131439 2255304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:57.137554 2255304 start.go:534] Will wait 60s for crictl version
	I0911 12:07:57.137645 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:07:57.141720 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:57.178003 2255304 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:57.178110 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.236871 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.303639 2255304 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0911 12:07:52.901170 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.401940 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.430776 2255187 api_server.go:72] duration metric: took 3.064926262s to wait for apiserver process to appear ...
	I0911 12:07:53.430809 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:07:53.430837 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431478 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.431528 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431982 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.932765 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.216903 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.216947 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.216964 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.322957 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.322994 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.432419 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.444961 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.445016 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:56.932209 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.942202 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.942242 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:57.432361 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:57.440671 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:07:57.453348 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:07:57.453393 2255187 api_server.go:131] duration metric: took 4.0225758s to wait for apiserver health ...
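
The healthz loop above tolerates "connection refused" while the apiserver restarts, then 403s from the anonymous user and 500s while bootstrap post-start hooks finish, and only stops once it receives a 200. Below is a minimal sketch of that polling pattern; the interval, the timeout, and the decision to skip TLS verification are illustrative choices, not minikube's.

// healthz_poll_sketch.go: poll the apiserver /healthz endpoint until it
// returns 200, tolerating the 403/500 responses seen above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}}, // sketch only
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.50.96:8443/healthz")
		if err != nil {
			fmt.Println("stopped:", err) // e.g. connection refused while the apiserver comes back
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz ok:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
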
	I0911 12:07:57.453408 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:57.453418 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:57.455939 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:07:57.457968 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:07:57.488156 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
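
The conflist itself is not shown in the log (it is copied from memory), so the sketch below is only a guess at the general shape of a bridge CNI configuration like the one written to /etc/cni/net.d/1-k8s.conflist; every field value here is an assumption, not the file minikube actually generated.

// bridge_cni_sketch.go: print a minimal bridge CNI conflist of the kind the
// step above installs. Field values are illustrative.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":      "bridge",
				"bridge":    "bridge",
				"isGateway": true,
				"ipMasq":    true,
				"ipam": map[string]any{
					"type":   "host-local",
					"subnet": "10.244.0.0/16",
				},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, _ := json.MarshalIndent(conflist, "", "  ")
	fmt.Println(string(out))
}
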
	I0911 12:07:57.524742 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:07:57.543532 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:07:57.543601 2255187 system_pods.go:61] "coredns-5dd5756b68-pkzcf" [4a44c7ec-bb5b-40f0-8d44-d5b77666cb95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:07:57.543616 2255187 system_pods.go:61] "etcd-embed-certs-235462" [c14f9910-0d1d-4494-9ebe-97173ab9abe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:07:57.543671 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4d95f49f-f9ad-40ce-9101-7e67ad978353] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:07:57.543686 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [753eea69-23f4-46f8-b631-36cf0f34d663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:07:57.543701 2255187 system_pods.go:61] "kube-proxy-v24dz" [e527b198-cf8f-4ada-af22-7979b249efd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:07:57.543711 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [b092d336-c45d-4b2c-87a5-df253a5fddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:07:57.543722 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-ldjwn" [4761a51f-8912-4be4-aa1d-2574e10da791] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:07:57.543735 2255187 system_pods.go:61] "storage-provisioner" [810336ff-14a1-4b3d-a4ff-2569f3710bab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:07:57.543744 2255187 system_pods.go:74] duration metric: took 18.975758ms to wait for pod list to return data ...
	I0911 12:07:57.543770 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:07:57.550468 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:07:57.550512 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:07:57.550527 2255187 node_conditions.go:105] duration metric: took 6.741621ms to run NodePressure ...
	I0911 12:07:57.550552 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:55.644857 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Start
	I0911 12:07:55.645094 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring networks are active...
	I0911 12:07:55.646010 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network default is active
	I0911 12:07:55.646393 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network mk-default-k8s-diff-port-484027 is active
	I0911 12:07:55.646808 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Getting domain xml...
	I0911 12:07:55.647513 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Creating domain...
	I0911 12:07:57.083879 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting to get IP...
	I0911 12:07:57.084769 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085290 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085361 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.085279 2256448 retry.go:31] will retry after 226.596764ms: waiting for machine to come up
	I0911 12:07:57.313593 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314083 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.314029 2256448 retry.go:31] will retry after 315.605673ms: waiting for machine to come up
	I0911 12:07:57.631774 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632292 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632329 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.632179 2256448 retry.go:31] will retry after 400.211275ms: waiting for machine to come up
	I0911 12:07:58.034189 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.305610 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:57.309276 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.309677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:57.309721 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.310066 2255304 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:57.316611 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:57.335580 2255304 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0911 12:07:57.335689 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:57.380592 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:07:57.380690 2255304 ssh_runner.go:195] Run: which lz4
	I0911 12:07:57.386023 2255304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:07:57.391807 2255304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:57.391861 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0911 12:07:58.002314 2255187 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010948 2255187 kubeadm.go:787] kubelet initialised
	I0911 12:07:58.010981 2255187 kubeadm.go:788] duration metric: took 8.627903ms waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010993 2255187 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:07:58.020253 2255187 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.027844 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027876 2255187 pod_ready.go:81] duration metric: took 7.583678ms waiting for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.027888 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027900 2255187 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.050283 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050321 2255187 pod_ready.go:81] duration metric: took 22.413628ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.050352 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050369 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.060314 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060348 2255187 pod_ready.go:81] duration metric: took 9.962502ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.060360 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060371 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.069122 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069152 2255187 pod_ready.go:81] duration metric: took 8.771982ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.069164 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069176 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329758 2255187 pod_ready.go:92] pod "kube-proxy-v24dz" in "kube-system" namespace has status "Ready":"True"
	I0911 12:07:59.329789 2255187 pod_ready.go:81] duration metric: took 1.260592229s waiting for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329804 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:01.526483 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
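
The pod_ready helper above repeatedly fetches each system-critical pod and checks its Ready condition until it flips to True or the 4m0s budget runs out, skipping pods whose node is not yet Ready. Below is a sketch of the same check with client-go; the kubeconfig path is a placeholder and the poll interval is arbitrary, while the namespace and pod name come from the log.

// podready_sketch.go: wait for a pod's Ready condition, the check the
// pod_ready.go lines above perform.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // placeholder path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ctx, cancel := context.WithTimeout(context.Background(), 4*time.Minute)
	defer cancel()
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "kube-scheduler-embed-certs-235462", metav1.GetOptions{})
		if err == nil && isReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		select {
		case <-ctx.Done():
			panic("timed out waiting for pod to become Ready")
		case <-time.After(2 * time.Second):
		}
	}
}
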
	I0911 12:07:58.034838 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.037141 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.034724 2256448 retry.go:31] will retry after 394.484585ms: waiting for machine to come up
	I0911 12:07:58.431365 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.431982 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.432004 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.431886 2256448 retry.go:31] will retry after 593.506569ms: waiting for machine to come up
	I0911 12:07:59.026841 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027490 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027518 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.027389 2256448 retry.go:31] will retry after 666.166785ms: waiting for machine to come up
	I0911 12:07:59.694652 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695161 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.695113 2256448 retry.go:31] will retry after 975.320046ms: waiting for machine to come up
	I0911 12:08:00.672258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672804 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672851 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:00.672755 2256448 retry.go:31] will retry after 1.161656415s: waiting for machine to come up
	I0911 12:08:01.835653 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836186 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836223 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:01.836130 2256448 retry.go:31] will retry after 1.505608393s: waiting for machine to come up
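
While the restarted default-k8s-diff-port VM boots, the driver keeps asking libvirt for the guest's DHCP lease and retries with a growing delay, as the retry.go lines above show. The sketch below mimics that loop with a stubbed lookup; the backoff factor, attempt cap, and the lookupGuestIP helper are all invented for illustration and are not part of minikube's API.

// ipwait_sketch.go: retry with a growing, jittered delay while waiting for a
// guest to obtain an IP, mirroring the retry.go log lines above.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupGuestIP is a placeholder for querying the DHCP leases of the
// libvirt network for a given MAC address.
func lookupGuestIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func main() {
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 20; attempt++ {
		ip, err := lookupGuestIP("52:54:00:b1:16:75")
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay))) // add jitter
		fmt.Printf("attempt %d: %v, will retry after %v\n", attempt, err, wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the delay between attempts
	}
	fmt.Println("gave up waiting for machine to come up")
}
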
	I0911 12:07:59.503695 2255304 crio.go:444] Took 2.117718 seconds to copy over tarball
	I0911 12:07:59.503800 2255304 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:02.939001 2255304 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.435164165s)
	I0911 12:08:02.939037 2255304 crio.go:451] Took 3.435307 seconds to extract the tarball
	I0911 12:08:02.939050 2255304 ssh_runner.go:146] rm: /preloaded.tar.lz4
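
Because /preloaded.tar.lz4 did not exist on the guest, the tarball was copied from the host cache, unpacked into /var with lz4, and then removed. The sketch below shows the guest-side half of that flow; it assumes the copy has already happened and simply mirrors the tar command in the log.

// preload_sketch.go: check for the preload tarball, extract it into /var,
// and clean it up, as the ssh_runner steps above do.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"
	if _, err := os.Stat(tarball); err != nil {
		fmt.Println("preload missing, would scp it from the cache first:", err)
		return
	}
	// Same extraction command the ssh_runner executes above.
	out, err := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball).CombinedOutput()
	if err != nil {
		panic(fmt.Sprintf("extract failed: %v\n%s", err, out))
	}
	if err := os.Remove(tarball); err != nil {
		fmt.Println("cleanup failed:", err)
	}
}
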
	I0911 12:08:02.984446 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:03.037419 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:08:03.037452 2255304 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:03.037546 2255304 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.037582 2255304 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.037597 2255304 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.037628 2255304 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.037583 2255304 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.037607 2255304 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0911 12:08:03.037551 2255304 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.037549 2255304 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.039413 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.039639 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.039819 2255304 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.039854 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.040031 2255304 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.040241 2255304 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0911 12:08:03.815561 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:04.614171 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:04.614199 2255187 pod_ready.go:81] duration metric: took 5.28438743s waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:04.614211 2255187 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:06.638688 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:03.343936 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353931 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353970 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:03.344315 2256448 retry.go:31] will retry after 1.414606279s: waiting for machine to come up
	I0911 12:08:04.761183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761667 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:04.761607 2256448 retry.go:31] will retry after 1.846261641s: waiting for machine to come up
	I0911 12:08:06.609258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609917 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609965 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:06.609851 2256448 retry.go:31] will retry after 2.938814697s: waiting for machine to come up
	I0911 12:08:03.225129 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.227566 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.231565 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.233817 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.239841 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0911 12:08:03.243250 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.247155 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.522779 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.711354 2255304 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0911 12:08:03.711381 2255304 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0911 12:08:03.711438 2255304 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0911 12:08:03.711473 2255304 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.711501 2255304 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0911 12:08:03.711514 2255304 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0911 12:08:03.711530 2255304 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0911 12:08:03.711602 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711641 2255304 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0911 12:08:03.711678 2255304 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.711735 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711536 2255304 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.711823 2255304 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0911 12:08:03.711854 2255304 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.711856 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711894 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711475 2255304 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.711934 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711541 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711474 2255304 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.712005 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.823116 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0911 12:08:03.823136 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.823232 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.823349 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.823374 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.823429 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.823499 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.957383 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0911 12:08:03.957459 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0911 12:08:03.957513 2255304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.957521 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0911 12:08:03.957564 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0911 12:08:03.957649 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0911 12:08:03.957707 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0911 12:08:03.957743 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0911 12:08:03.962841 2255304 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0911 12:08:03.962863 2255304 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.962905 2255304 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0911 12:08:05.018464 2255304 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.055478429s)
	I0911 12:08:05.018510 2255304 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0911 12:08:05.018571 2255304 cache_images.go:92] LoadImages completed in 1.981102195s
	W0911 12:08:05.018661 2255304 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
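	For context, the image-cache path above can be replayed by hand with the same commands the runner issues. This is a minimal sketch using only commands that appear in this log (image name and tarball path are taken from the entries above):

	# ask the runtime whether the image is already present (what cache_images.go checks via podman)
	sudo podman image inspect --format '{{.Id}}' registry.k8s.io/pause:3.1
	# drop any stale copy so the cached tarball loads cleanly
	sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	# load the tarball that was copied into the guest
	sudo podman load -i /var/lib/minikube/images/pause_3.1

	The warning that follows is expected here: only pause_3.1 was present in the host cache, so the remaining v1.16.0 images are simply left for the runtime to pull later.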
	I0911 12:08:05.018747 2255304 ssh_runner.go:195] Run: crio config
	I0911 12:08:05.107550 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:05.107585 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:05.107614 2255304 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:05.107641 2255304 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642215 NodeName:old-k8s-version-642215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 12:08:05.107908 2255304 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-642215
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.58:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:05.108027 2255304 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642215 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:08:05.108106 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0911 12:08:05.120210 2255304 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:05.120311 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:05.129517 2255304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0911 12:08:05.151855 2255304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:05.169543 2255304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0911 12:08:05.190304 2255304 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:05.196014 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:05.211627 2255304 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215 for IP: 192.168.61.58
	I0911 12:08:05.211663 2255304 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:05.211876 2255304 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:05.211943 2255304 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:05.212043 2255304 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.key
	I0911 12:08:05.212130 2255304 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key.7152e027
	I0911 12:08:05.212217 2255304 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key
	I0911 12:08:05.212397 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:05.212451 2255304 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:05.212467 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:05.212500 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:05.212531 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:05.212568 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:05.212637 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:05.213373 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:05.242362 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:05.272949 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:05.299359 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:05.326203 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:05.354388 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:05.385150 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:05.415683 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:05.449119 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:05.476397 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:05.503652 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:05.531520 2255304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:05.550108 2255304 ssh_runner.go:195] Run: openssl version
	I0911 12:08:05.556982 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:05.569083 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574490 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574570 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.581479 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:05.596824 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:05.607900 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613627 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613711 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.620309 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:05.630995 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:05.645786 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652682 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652773 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.660784 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
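	The three blocks above install each PEM into the system trust store: copy it under /usr/share/ca-certificates, compute its OpenSSL subject hash, and symlink /etc/ssl/certs/<hash>.0 back to it (b5213941.0, 51391683.0 and 3ec20f2e.0 are those hashes). A hedged sketch of one iteration, with the HASH variable introduced here only for illustration:

	# the hash openssl uses to name trust-store links for this certificate
	HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
	# link the certificate into /etc/ssl/certs under that hash so TLS clients find it
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/${HASH}.0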
	I0911 12:08:05.675417 2255304 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:05.681969 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:05.690345 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:05.697454 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:05.706283 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:05.712913 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:05.719308 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
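	These six openssl runs are expiry checks: -checkend 86400 exits 0 only if the certificate is still valid for at least another 86400 seconds (24 hours), and a non-zero exit is what would make the certificates be regenerated. A one-line sketch of the same check:

	# exit status tells us whether the cert survives the next 24h
	openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 && echo "valid for 24h" || echo "expiring soon"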
	I0911 12:08:05.726307 2255304 kubeadm.go:404] StartCluster: {Name:old-k8s-version-642215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:05.726414 2255304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:05.726478 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:05.765092 2255304 cri.go:89] found id: ""
	I0911 12:08:05.765172 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:05.775654 2255304 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:05.775681 2255304 kubeadm.go:636] restartCluster start
	I0911 12:08:05.775749 2255304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:05.785235 2255304 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.786289 2255304 kubeconfig.go:92] found "old-k8s-version-642215" server: "https://192.168.61.58:8443"
	I0911 12:08:05.789768 2255304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:05.799009 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.799092 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.811208 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.811235 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.811301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.822223 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.322909 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.323053 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.337866 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.823220 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.823328 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.839573 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.323145 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.323245 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.335054 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.822427 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.822536 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.834385 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.146768 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:11.637314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:09.552075 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552494 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:09.552442 2256448 retry.go:31] will retry after 3.623277093s: waiting for machine to come up
	I0911 12:08:08.323215 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.323301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.335501 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:08.822942 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.823061 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.840055 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.322586 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.322692 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.338101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.822702 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.822845 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.835245 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.322666 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.322750 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.337101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.822530 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.822662 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.838511 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.323206 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.323329 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.338239 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.822952 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.823044 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.838752 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.323296 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.323384 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.335174 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.822659 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.822775 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.834762 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.637784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:16.138584 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:13.178553 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179008 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179041 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:13.178961 2256448 retry.go:31] will retry after 3.636806595s: waiting for machine to come up
	I0911 12:08:16.818087 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818548 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has current primary IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Found IP for machine: 192.168.39.230
	I0911 12:08:16.818600 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserving static IP address...
	I0911 12:08:16.819118 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.819156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserved static IP address: 192.168.39.230
	I0911 12:08:16.819182 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | skip adding static IP to network mk-default-k8s-diff-port-484027 - found existing host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"}
	I0911 12:08:16.819204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Getting to WaitForSSH function...
	I0911 12:08:16.819221 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for SSH to be available...
	I0911 12:08:16.821746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822235 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.822270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822454 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH client type: external
	I0911 12:08:16.822500 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa (-rw-------)
	I0911 12:08:16.822551 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:16.822576 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | About to run SSH command:
	I0911 12:08:16.822590 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | exit 0
	I0911 12:08:16.957464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:16.957845 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetConfigRaw
	I0911 12:08:16.958573 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:16.961262 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.961726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.961762 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.962073 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:08:16.962281 2255814 machine.go:88] provisioning docker machine ...
	I0911 12:08:16.962301 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:16.962594 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962777 2255814 buildroot.go:166] provisioning hostname "default-k8s-diff-port-484027"
	I0911 12:08:16.962799 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962971 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:16.965571 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966095 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.966134 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966313 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:16.966531 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966685 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966837 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:16.967021 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:16.967739 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:16.967764 2255814 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-484027 && echo "default-k8s-diff-port-484027" | sudo tee /etc/hostname
	I0911 12:08:17.106967 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-484027
	
	I0911 12:08:17.107036 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.110243 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110663 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.110737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.111197 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111388 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.111782 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.112200 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.112223 2255814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-484027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-484027/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-484027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:17.238410 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:17.238450 2255814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:17.238508 2255814 buildroot.go:174] setting up certificates
	I0911 12:08:17.238520 2255814 provision.go:83] configureAuth start
	I0911 12:08:17.238536 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:17.238938 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:17.241635 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242044 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.242106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242209 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.244737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245093 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.245117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245295 2255814 provision.go:138] copyHostCerts
	I0911 12:08:17.245360 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:17.245375 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:17.245434 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:17.245530 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:17.245537 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:17.245557 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:17.245627 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:17.245634 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:17.245651 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:17.245708 2255814 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-484027 san=[192.168.39.230 192.168.39.230 localhost 127.0.0.1 minikube default-k8s-diff-port-484027]
	I0911 12:08:17.540142 2255814 provision.go:172] copyRemoteCerts
	I0911 12:08:17.540233 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:17.540270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.543823 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544237 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.544277 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544485 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.544706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.544916 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.545060 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:17.645425 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:17.675288 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0911 12:08:17.703043 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:17.732683 2255814 provision.go:86] duration metric: configureAuth took 494.12506ms
	I0911 12:08:17.732713 2255814 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:17.732955 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:17.733076 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.736740 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.737244 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.737707 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.737914 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.738084 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.738324 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.738749 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.738774 2255814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:13.323070 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.323174 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.334828 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.822403 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.822490 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.834374 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.323004 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.323100 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.334774 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.822351 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.822465 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.834368 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:15.323045 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:15.323154 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:15.334863 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
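	The string of "Checking apiserver status ..." entries above is a poll loop: the runner looks for a kube-apiserver process with pgrep roughly every half second and gives up when its context expires, which is why the next line reports "context deadline exceeded". A rough shell equivalent, with an illustrative 10-second deadline (the real timeout is minikube's, not this value):

	# poll until the apiserver process shows up or the deadline passes
	deadline=$((SECONDS + 10))
	until sudo pgrep -xnf 'kube-apiserver.*minikube.*' > /dev/null; do
	  if [ "$SECONDS" -ge "$deadline" ]; then echo "apiserver never came up"; break; fi
	  sleep 0.5
	done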
	I0911 12:08:15.799700 2255304 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:15.799736 2255304 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:15.799751 2255304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:15.799821 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:15.831051 2255304 cri.go:89] found id: ""
	I0911 12:08:15.831140 2255304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:15.847072 2255304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:15.856353 2255304 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:15.856425 2255304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865711 2255304 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865740 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:15.990047 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.312314 2255304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.322225408s)
	I0911 12:08:17.312354 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.521733 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.627343 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.723857 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:17.723964 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:17.742688 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
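	Rather than a full kubeadm init, the restart path above replays individual init phases against the regenerated /var/tmp/minikube/kubeadm.yaml. Condensing the commands already shown in this log (PATH prefix kept as-is):

	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml
	sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml

	After the last phase writes the static pod manifests, the runner returns to polling pgrep for the apiserver process, as seen at 12:08:17.723 above.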
	I0911 12:08:18.336038 2255048 start.go:369] acquired machines lock for "no-preload-352076" in 1m2.388468349s
	I0911 12:08:18.336100 2255048 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:08:18.336125 2255048 fix.go:54] fixHost starting: 
	I0911 12:08:18.336615 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:18.336663 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:18.355715 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0911 12:08:18.356243 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:18.356901 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:08:18.356931 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:18.357385 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:18.357585 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:18.357787 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:08:18.359541 2255048 fix.go:102] recreateIfNeeded on no-preload-352076: state=Stopped err=<nil>
	I0911 12:08:18.359571 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	W0911 12:08:18.359750 2255048 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:08:18.361628 2255048 out.go:177] * Restarting existing kvm2 VM for "no-preload-352076" ...
	I0911 12:08:18.363286 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Start
	I0911 12:08:18.363532 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring networks are active...
	I0911 12:08:18.364515 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network default is active
	I0911 12:08:18.364894 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network mk-no-preload-352076 is active
	I0911 12:08:18.365345 2255048 main.go:141] libmachine: (no-preload-352076) Getting domain xml...
	I0911 12:08:18.366191 2255048 main.go:141] libmachine: (no-preload-352076) Creating domain...
	I0911 12:08:18.078952 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:18.078979 2255814 machine.go:91] provisioned docker machine in 1.116684764s
	I0911 12:08:18.078991 2255814 start.go:300] post-start starting for "default-k8s-diff-port-484027" (driver="kvm2")
	I0911 12:08:18.079011 2255814 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:18.079057 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.079482 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:18.079520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.082212 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082641 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.082674 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.083043 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.083227 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.083403 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.170810 2255814 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:18.175342 2255814 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:18.175370 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:18.175457 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:18.175583 2255814 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:18.175722 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:18.184543 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:18.209487 2255814 start.go:303] post-start completed in 130.475291ms
	I0911 12:08:18.209516 2255814 fix.go:56] fixHost completed within 22.594854569s
	I0911 12:08:18.209540 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.212339 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212779 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.212832 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212967 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.213187 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213366 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213515 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.213680 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:18.214071 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:18.214083 2255814 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:08:18.335862 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434098.277311369
	
	I0911 12:08:18.335893 2255814 fix.go:206] guest clock: 1694434098.277311369
	I0911 12:08:18.335902 2255814 fix.go:219] Guest: 2023-09-11 12:08:18.277311369 +0000 UTC Remote: 2023-09-11 12:08:18.20951981 +0000 UTC m=+200.212950109 (delta=67.791559ms)
	I0911 12:08:18.335925 2255814 fix.go:190] guest clock delta is within tolerance: 67.791559ms
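
Note on the clock check above: minikube reads the guest's clock with `date +%s.%N` over SSH, compares it to the host clock, and only resynchronizes when the drift exceeds a tolerance. Below is a minimal Go sketch of that comparison (not minikube's own fix.go code; the one-second tolerance is a placeholder), reusing the two timestamps from this log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the output of `date +%s.%N` captured over SSH and
// returns the guest-minus-host drift.
func guestClockDelta(dateOutput string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(dateOutput), 64)
	if err != nil {
		return 0, err
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1694434098, 209519810) // host-side timestamp taken from the log above
	delta, err := guestClockDelta("1694434098.277311369\n", host)
	if err != nil {
		panic(err)
	}
	tolerance := time.Second // hypothetical tolerance, just for the sketch
	fmt.Printf("guest clock delta: %v (within tolerance: %v)\n", delta, delta > -tolerance && delta < tolerance)
}

Running this prints a drift of roughly 67.8ms, matching the "delta=67.791559ms" line above.
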
	I0911 12:08:18.335932 2255814 start.go:83] releasing machines lock for "default-k8s-diff-port-484027", held for 22.721324127s
	I0911 12:08:18.335977 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.336342 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:18.339935 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340372 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.340411 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340801 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341832 2255814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:18.341895 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.342153 2255814 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:18.342219 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.345331 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345619 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345716 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.345751 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346068 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346282 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.346367 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.346409 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346443 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.346624 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.346803 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346960 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.347119 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.347284 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.455877 2255814 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:18.463787 2255814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:18.620444 2255814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:18.628878 2255814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:18.628972 2255814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:18.652267 2255814 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:18.652301 2255814 start.go:466] detecting cgroup driver to use...
	I0911 12:08:18.652381 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:18.672306 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:18.690514 2255814 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:18.690594 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:18.709032 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:18.727521 2255814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:18.859864 2255814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:19.005708 2255814 docker.go:212] disabling docker service ...
	I0911 12:08:19.005809 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:19.026177 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:19.043931 2255814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:19.184060 2255814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:19.305184 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:19.326550 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:19.351313 2255814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:19.351400 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.366747 2255814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:19.366836 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.382272 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.395743 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.408786 2255814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:19.424229 2255814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:19.438367 2255814 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:19.438450 2255814 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:19.457417 2255814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:08:19.470001 2255814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:19.629977 2255814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:19.846900 2255814 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:19.846994 2255814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:19.854282 2255814 start.go:534] Will wait 60s for crictl version
	I0911 12:08:19.854378 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:08:19.859252 2255814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:19.897263 2255814 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:19.897349 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:19.966155 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:20.024697 2255814 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:08:18.639188 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.649395 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.026156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:20.029726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030249 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:20.030286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030572 2255814 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:20.035523 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:20.053903 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:20.053997 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:20.096570 2255814 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:20.096666 2255814 ssh_runner.go:195] Run: which lz4
	I0911 12:08:20.102350 2255814 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 12:08:20.107338 2255814 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:08:20.107385 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:08:22.215033 2255814 crio.go:444] Took 2.112735 seconds to copy over tarball
	I0911 12:08:22.215168 2255814 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
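
Since the stat above found no tarball on the node, the cached preloaded-images archive is copied over and unpacked into /var with lz4, which seeds CRI-O's image store before kubeadm runs. A rough, stand-alone sketch of that check-then-extract step (not minikube's actual ssh_runner-based code; this one shells out to tar locally):

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const tarball = "/preloaded.tar.lz4"

	// Existence check, mirroring the `stat` in the log.
	if _, err := os.Stat(tarball); err != nil {
		// minikube copies the cached preloaded-images tarball here; elided in this sketch.
		fmt.Println("tarball missing, it would be copied from the local cache first:", err)
		return
	}

	// Equivalent of the logged command: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintln(os.Stderr, "extract failed:", err)
		os.Exit(1)
	}
}
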
	I0911 12:08:18.262191 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.762029 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.262094 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.316271 2255304 api_server.go:72] duration metric: took 1.592409696s to wait for apiserver process to appear ...
	I0911 12:08:19.316309 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:19.316329 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:19.892254 2255048 main.go:141] libmachine: (no-preload-352076) Waiting to get IP...
	I0911 12:08:19.893353 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:19.893857 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:19.893939 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:19.893867 2256639 retry.go:31] will retry after 256.490953ms: waiting for machine to come up
	I0911 12:08:20.152717 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.153686 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.153718 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.153662 2256639 retry.go:31] will retry after 308.528476ms: waiting for machine to come up
	I0911 12:08:20.464569 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.465179 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.465240 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.465150 2256639 retry.go:31] will retry after 329.79495ms: waiting for machine to come up
	I0911 12:08:20.797010 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.797581 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.797615 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.797512 2256639 retry.go:31] will retry after 388.108578ms: waiting for machine to come up
	I0911 12:08:21.187304 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.187980 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.188006 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.187878 2256639 retry.go:31] will retry after 547.488463ms: waiting for machine to come up
	I0911 12:08:21.736835 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.737425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.737466 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.737352 2256639 retry.go:31] will retry after 669.118316ms: waiting for machine to come up
	I0911 12:08:22.407727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:22.408435 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:22.408471 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:22.408353 2256639 retry.go:31] will retry after 986.70059ms: waiting for machine to come up
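
The retry.go lines above show the driver polling libvirt for the domain's DHCP lease with a growing backoff until the VM reports an IP address. A toy Go sketch of that loop (lookupIP is a hypothetical stand-in for the lease query, and the growth factor is illustrative):

package main

import (
	"errors"
	"fmt"
	"time"
)

var errNoIP = errors.New("machine has no IP address yet")

// lookupIP stands in for querying the libvirt network's DHCP leases; here it
// simply pretends the lease shows up on the fifth attempt.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoIP
	}
	return "192.168.50.2", nil
}

func main() {
	backoff := 250 * time.Millisecond
	for attempt := 1; ; attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			fmt.Println("machine is up at", ip)
			return
		}
		fmt.Printf("attempt %d: %v, retrying after %v\n", attempt, err, backoff)
		time.Sleep(backoff)
		backoff += backoff / 3 // grow the interval, roughly like the retries in the log
	}
}
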
	I0911 12:08:23.139403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.141299 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:27.493149 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.680145 2255814 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.464917771s)
	I0911 12:08:25.680187 2255814 crio.go:451] Took 3.465097 seconds to extract the tarball
	I0911 12:08:25.680201 2255814 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:25.721940 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:25.770149 2255814 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:08:25.770189 2255814 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:08:25.770296 2255814 ssh_runner.go:195] Run: crio config
	I0911 12:08:25.844108 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:25.844142 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:25.844170 2255814 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:25.844197 2255814 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-484027 NodeName:default-k8s-diff-port-484027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:08:25.844471 2255814 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-484027"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:25.844584 2255814 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-484027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0911 12:08:25.844751 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:08:25.855558 2255814 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:25.855658 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:25.865531 2255814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0911 12:08:25.890631 2255814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:25.914304 2255814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0911 12:08:25.938065 2255814 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:25.943138 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:25.963689 2255814 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027 for IP: 192.168.39.230
	I0911 12:08:25.963744 2255814 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:25.963968 2255814 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:25.964026 2255814 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:25.964139 2255814 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.key
	I0911 12:08:25.964245 2255814 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key.165d62e4
	I0911 12:08:25.964309 2255814 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key
	I0911 12:08:25.964546 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:25.964599 2255814 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:25.964618 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:25.964655 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:25.964699 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:25.964731 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:25.964805 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:25.965758 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:26.001391 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:26.032345 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:26.065593 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:26.100792 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:26.135603 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:26.170029 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:26.203119 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:26.232040 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:26.262353 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:26.292733 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:26.326750 2255814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:26.346334 2255814 ssh_runner.go:195] Run: openssl version
	I0911 12:08:26.353175 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:26.365742 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372007 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372086 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.378954 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:26.390365 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:26.403147 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.410930 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.411048 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.419889 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:26.433366 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:26.445752 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452481 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452563 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.461097 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
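
Each `openssl x509 -hash` / `ln -fs` pair above installs a CA certificate by linking it into /etc/ssl/certs under its subject-hash name (<hash>.0), which is how OpenSSL-based clients locate trusted certificates. A small Go sketch of the same idea (illustrative only; installCA is not a minikube function, and writing to /etc/ssl/certs needs root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCA computes the OpenSSL subject hash of a CA certificate and links it
// into certsDir as <hash>.0, mirroring the openssl + ln commands in the log.
func installCA(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
	_ = os.Remove(link) // replace an existing link, mirroring `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := installCA("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
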
	I0911 12:08:26.477855 2255814 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:26.483947 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:26.492879 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:26.501391 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:26.510124 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:26.518732 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:26.527356 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 12:08:26.536063 2255814 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:26.536225 2255814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:26.536300 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:26.575522 2255814 cri.go:89] found id: ""
	I0911 12:08:26.575617 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:26.586011 2255814 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:26.586043 2255814 kubeadm.go:636] restartCluster start
	I0911 12:08:26.586114 2255814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:26.596758 2255814 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.598534 2255814 kubeconfig.go:92] found "default-k8s-diff-port-484027" server: "https://192.168.39.230:8444"
	I0911 12:08:26.603031 2255814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:26.617921 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.618066 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.632719 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.632739 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.632793 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.650036 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.150299 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.150397 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.165783 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.650311 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.650416 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.665184 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:24.317268 2255304 api_server.go:269] stopped: https://192.168.61.58:8443/healthz: Get "https://192.168.61.58:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0911 12:08:24.317328 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:26.742901 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:26.742942 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:27.243118 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.654196 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.654260 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:27.743438 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.767557 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.767607 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:28.243610 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:28.251858 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:28.262619 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:28.262659 2255304 api_server.go:131] duration metric: took 8.946341912s to wait for apiserver health ...
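
The healthz sequence above is a plain polling loop: the 403 (anonymous access rejected, most likely because the default RBAC roles that expose /healthz are not bootstrapped yet) and the 500s (post-start hooks still failing) both count as "not ready", and the wait ends once /healthz returns 200 "ok". A condensed Go sketch of such a poller (not minikube's api_server.go; certificate verification is skipped here purely to keep the sketch short):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz endpoint until it answers 200
// "ok" or the deadline passes; 403 and 500 responses just mean "not ready yet".
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body)
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %v", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.58:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
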
	I0911 12:08:28.262670 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:28.262676 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:28.264705 2255304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:23.396798 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:23.398997 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:23.399029 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:23.397251 2256639 retry.go:31] will retry after 1.384367074s: waiting for machine to come up
	I0911 12:08:24.783036 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:24.783547 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:24.783584 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:24.783489 2256639 retry.go:31] will retry after 1.172643107s: waiting for machine to come up
	I0911 12:08:25.958217 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:25.958989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:25.959024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:25.958929 2256639 retry.go:31] will retry after 2.243377044s: waiting for machine to come up
	I0911 12:08:28.205538 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:28.206196 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:28.206226 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:28.206137 2256639 retry.go:31] will retry after 1.83460511s: waiting for machine to come up
	I0911 12:08:28.266346 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:28.280404 2255304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:28.308228 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:28.317951 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:28.317994 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:28.318002 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:28.318010 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:28.318024 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Pending
	I0911 12:08:28.318030 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:28.318035 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:28.318039 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:28.318045 2255304 system_pods.go:74] duration metric: took 9.788007ms to wait for pod list to return data ...
	I0911 12:08:28.318055 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:28.323536 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:28.323578 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:28.323593 2255304 node_conditions.go:105] duration metric: took 5.532859ms to run NodePressure ...
	I0911 12:08:28.323619 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:28.927871 2255304 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938224 2255304 kubeadm.go:787] kubelet initialised
	I0911 12:08:28.938256 2255304 kubeadm.go:788] duration metric: took 10.348938ms waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938267 2255304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:28.944405 2255304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.951735 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951774 2255304 pod_ready.go:81] duration metric: took 7.334386ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.951786 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951800 2255304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.964451 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964487 2255304 pod_ready.go:81] duration metric: took 12.678175ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.964499 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964510 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.971472 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971503 2255304 pod_ready.go:81] duration metric: took 6.983445ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.971514 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971523 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.978657 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978691 2255304 pod_ready.go:81] duration metric: took 7.156987ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.978704 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978728 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.334593 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334652 2255304 pod_ready.go:81] duration metric: took 355.905465ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.334670 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334683 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.734221 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734262 2255304 pod_ready.go:81] duration metric: took 399.567918ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.734275 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734287 2255304 pod_ready.go:38] duration metric: took 796.006553ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
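
The pod_ready waits above check each system-critical pod for a PodReady condition of True, and skip pods whose node is not yet Ready (the "(skipping!)" messages). A simplified client-go sketch of the readiness check itself (minikube's own helpers differ; the pod name comes from this log and the kubeconfig path is an assumption):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True, which is
// the condition the pod_ready waits above are looking for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(config)

	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.Background(), "coredns-5644d7b6d9-55m96", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod to become Ready")
}
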
	I0911 12:08:29.734313 2255304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:29.749280 2255304 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:29.749313 2255304 kubeadm.go:640] restartCluster took 23.973623788s
	I0911 12:08:29.749325 2255304 kubeadm.go:406] StartCluster complete in 24.023033441s
	I0911 12:08:29.749349 2255304 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.749453 2255304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:29.752216 2255304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.752582 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:29.752784 2255304 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:29.752912 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:08:29.752947 2255304 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-642215"
	I0911 12:08:29.752971 2255304 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-642215"
	I0911 12:08:29.752976 2255304 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753016 2255304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-642215"
	W0911 12:08:29.752979 2255304 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:29.753159 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.752984 2255304 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753232 2255304 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-642215"
	W0911 12:08:29.753281 2255304 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:29.753369 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.753517 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753554 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753599 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753630 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753954 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.754016 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.773524 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:08:29.773614 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0911 12:08:29.774181 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774418 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774950 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.774967 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775141 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0911 12:08:29.775158 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.775176 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775584 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775585 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775597 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.775756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.776112 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776144 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.776178 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.776197 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.776510 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.776970 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776990 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.790443 2255304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-642215" context rescaled to 1 replicas
	I0911 12:08:29.790502 2255304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:29.793918 2255304 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:29.796131 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:29.798116 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0911 12:08:29.798581 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.799554 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.799580 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.800105 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.800439 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.802764 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.805061 2255304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:29.803246 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0911 12:08:29.807001 2255304 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:29.807025 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:29.807053 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.807866 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.807924 2255304 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-642215"
	W0911 12:08:29.807949 2255304 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:29.807985 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.808406 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.808442 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.809644 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.809667 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.817010 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.817046 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.817101 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817131 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.817158 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817555 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.817625 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.817868 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.818244 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.820240 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.822846 2255304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:29.824505 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:29.824526 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:29.824554 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.827924 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828359 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.828396 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828684 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.828950 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.829099 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.829285 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.830900 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0911 12:08:29.831463 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.832028 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.832049 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.832646 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.833261 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.833313 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.868600 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0911 12:08:29.869171 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.869822 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.869842 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.870236 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.870416 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.872928 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.873212 2255304 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:29.873232 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:29.873255 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.876313 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.876963 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.876983 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.876999 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.877168 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.877331 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.877468 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:30.019745 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:30.061364 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:30.061393 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:30.080562 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:30.100494 2255304 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:30.100511 2255304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:30.120618 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:30.120647 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:30.173391 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.173427 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:30.208772 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.757802 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.757841 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.757982 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758021 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758294 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758334 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758344 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758353 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758377 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758620 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758646 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758660 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758677 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758690 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758701 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758717 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758743 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758943 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758954 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.759016 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.759052 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.759062 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859384 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859426 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.859828 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.859853 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859864 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859874 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.860302 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.860336 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.860357 2255304 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-642215"
	I0911 12:08:30.862687 2255304 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:08:29.637791 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:31.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:28.150174 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.150294 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.166331 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:28.650905 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.650996 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.664146 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.150646 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.150745 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.166569 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.651031 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.651129 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.664106 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.150429 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.150535 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.167297 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.650364 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.650458 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.664180 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.150419 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.150521 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.168242 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.650834 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.650922 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.664772 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.150232 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.150362 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.163224 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.650676 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.650773 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.667077 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.864433 2255304 addons.go:502] enable addons completed in 1.111642638s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:08:32.139191 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:30.042388 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:30.043026 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:30.043054 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:30.042967 2256639 retry.go:31] will retry after 3.282840664s: waiting for machine to come up
	I0911 12:08:33.327456 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:33.328007 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:33.328066 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:33.327941 2256639 retry.go:31] will retry after 4.185053881s: waiting for machine to come up
	I0911 12:08:33.639996 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:36.139377 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:33.150668 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.150797 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.163178 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:33.650733 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.650845 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.666475 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.150939 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.151037 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.163985 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.650139 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.650250 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.664850 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.150224 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.150351 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.169894 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.650946 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.651044 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.665438 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.151019 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:36.151134 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:36.164843 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.618412 2255814 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:36.618460 2255814 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:36.618482 2255814 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:36.618571 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:36.657264 2255814 cri.go:89] found id: ""
	I0911 12:08:36.657366 2255814 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:36.676222 2255814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:36.686832 2255814 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:36.686923 2255814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699618 2255814 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699654 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:36.842821 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.471899 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.699214 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.784721 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.870994 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:37.871085 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:37.894561 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:34.638777 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.138575 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.515376 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:37.515955 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:37.515989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:37.515896 2256639 retry.go:31] will retry after 3.472863196s: waiting for machine to come up
	I0911 12:08:38.138433 2255304 node_ready.go:49] node "old-k8s-version-642215" has status "Ready":"True"
	I0911 12:08:38.138464 2255304 node_ready.go:38] duration metric: took 8.037923512s waiting for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:38.138475 2255304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:38.143177 2255304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664411 2255304 pod_ready.go:92] pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.664449 2255304 pod_ready.go:81] duration metric: took 521.244524ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664463 2255304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670838 2255304 pod_ready.go:92] pod "etcd-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.670876 2255304 pod_ready.go:81] duration metric: took 6.404356ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670890 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679254 2255304 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.679284 2255304 pod_ready.go:81] duration metric: took 8.385069ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679299 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939484 2255304 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.939514 2255304 pod_ready.go:81] duration metric: took 260.206232ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939529 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337858 2255304 pod_ready.go:92] pod "kube-proxy-855lt" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.337894 2255304 pod_ready.go:81] duration metric: took 398.358394ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337907 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738437 2255304 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.738465 2255304 pod_ready.go:81] duration metric: took 400.549428ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738479 2255304 pod_ready.go:38] duration metric: took 1.599991385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:39.738509 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:39.738569 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.760727 2255304 api_server.go:72] duration metric: took 9.970181642s to wait for apiserver process to appear ...
	I0911 12:08:39.760774 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:39.760797 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:39.768195 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:39.769416 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:39.769442 2255304 api_server.go:131] duration metric: took 8.658497ms to wait for apiserver health ...
	I0911 12:08:39.769457 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:39.940647 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:39.940683 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:39.940701 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:39.940708 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:39.940715 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:39.940722 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:39.940729 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:39.940736 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:39.940747 2255304 system_pods.go:74] duration metric: took 171.283587ms to wait for pod list to return data ...
	I0911 12:08:39.940763 2255304 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:08:40.139718 2255304 default_sa.go:45] found service account: "default"
	I0911 12:08:40.139751 2255304 default_sa.go:55] duration metric: took 198.981243ms for default service account to be created ...
	I0911 12:08:40.139763 2255304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:08:40.340959 2255304 system_pods.go:86] 7 kube-system pods found
	I0911 12:08:40.340998 2255304 system_pods.go:89] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:40.341008 2255304 system_pods.go:89] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:40.341015 2255304 system_pods.go:89] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:40.341028 2255304 system_pods.go:89] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:40.341035 2255304 system_pods.go:89] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:40.341042 2255304 system_pods.go:89] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:40.341051 2255304 system_pods.go:89] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:40.341061 2255304 system_pods.go:126] duration metric: took 201.290886ms to wait for k8s-apps to be running ...
	I0911 12:08:40.341073 2255304 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:08:40.341163 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:40.359994 2255304 system_svc.go:56] duration metric: took 18.903474ms WaitForService to wait for kubelet.
	I0911 12:08:40.360036 2255304 kubeadm.go:581] duration metric: took 10.569498287s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:08:40.360063 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:40.538713 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:40.538748 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:40.538762 2255304 node_conditions.go:105] duration metric: took 178.692637ms to run NodePressure ...
	I0911 12:08:40.538778 2255304 start.go:228] waiting for startup goroutines ...
	I0911 12:08:40.538785 2255304 start.go:233] waiting for cluster config update ...
	I0911 12:08:40.538798 2255304 start.go:242] writing updated cluster config ...
	I0911 12:08:40.539175 2255304 ssh_runner.go:195] Run: rm -f paused
	I0911 12:08:40.601745 2255304 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0911 12:08:40.604230 2255304 out.go:177] 
	W0911 12:08:40.606184 2255304 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0911 12:08:40.607933 2255304 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0911 12:08:40.609870 2255304 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-642215" cluster and "default" namespace by default
	I0911 12:08:38.638441 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:40.639280 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:38.411419 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:38.910721 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.410710 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.911432 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.411115 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.438764 2255814 api_server.go:72] duration metric: took 2.567766062s to wait for apiserver process to appear ...
	I0911 12:08:40.438803 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:40.438828 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.439582 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.439644 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.440098 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.940202 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.989968 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990485 2255048 main.go:141] libmachine: (no-preload-352076) Found IP for machine: 192.168.72.157
	I0911 12:08:40.990519 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has current primary IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990530 2255048 main.go:141] libmachine: (no-preload-352076) Reserving static IP address...
	I0911 12:08:40.990910 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.990942 2255048 main.go:141] libmachine: (no-preload-352076) Reserved static IP address: 192.168.72.157
	I0911 12:08:40.991004 2255048 main.go:141] libmachine: (no-preload-352076) Waiting for SSH to be available...
	I0911 12:08:40.991024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | skip adding static IP to network mk-no-preload-352076 - found existing host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"}
	I0911 12:08:40.991044 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Getting to WaitForSSH function...
	I0911 12:08:40.994061 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994417 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.994478 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994612 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH client type: external
	I0911 12:08:40.994653 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa (-rw-------)
	I0911 12:08:40.994693 2255048 main.go:141] libmachine: (no-preload-352076) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:40.994711 2255048 main.go:141] libmachine: (no-preload-352076) DBG | About to run SSH command:
	I0911 12:08:40.994725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | exit 0
	I0911 12:08:41.093865 2255048 main.go:141] libmachine: (no-preload-352076) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:41.094360 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetConfigRaw
	I0911 12:08:41.095142 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.098534 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.098944 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.098985 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.099319 2255048 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/config.json ...
	I0911 12:08:41.099667 2255048 machine.go:88] provisioning docker machine ...
	I0911 12:08:41.099711 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:41.100079 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100503 2255048 buildroot.go:166] provisioning hostname "no-preload-352076"
	I0911 12:08:41.100535 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100868 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.104253 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.104920 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.105102 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.105420 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.105864 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106201 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106627 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.106937 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.107432 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.107447 2255048 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-352076 && echo "no-preload-352076" | sudo tee /etc/hostname
	I0911 12:08:41.249885 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-352076
	
	I0911 12:08:41.249919 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.253419 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.253854 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.253892 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.254125 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.254373 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254576 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254752 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.254945 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.255592 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.255624 2255048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-352076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-352076/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-352076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:41.394308 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:41.394348 2255048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:41.394378 2255048 buildroot.go:174] setting up certificates
	I0911 12:08:41.394388 2255048 provision.go:83] configureAuth start
	I0911 12:08:41.394401 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.394737 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.398042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398506 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.398540 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398747 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.401368 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401743 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.401797 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401939 2255048 provision.go:138] copyHostCerts
	I0911 12:08:41.402020 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:41.402034 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:41.402102 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:41.402226 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:41.402238 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:41.402278 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:41.402374 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:41.402386 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:41.402413 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:41.402501 2255048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.no-preload-352076 san=[192.168.72.157 192.168.72.157 localhost 127.0.0.1 minikube no-preload-352076]
	I0911 12:08:41.717751 2255048 provision.go:172] copyRemoteCerts
	I0911 12:08:41.717828 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:41.717865 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.721152 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721457 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.721499 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721720 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.721943 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.722111 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.722261 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:41.818932 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:41.846852 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:41.875977 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 12:08:41.905364 2255048 provision.go:86] duration metric: configureAuth took 510.946609ms
	I0911 12:08:41.905401 2255048 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:41.905662 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:41.905762 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.909182 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909656 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.909725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909913 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.910149 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910342 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910487 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.910649 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.911134 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.911154 2255048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:42.260214 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:42.260254 2255048 machine.go:91] provisioned docker machine in 1.16057097s
	I0911 12:08:42.260268 2255048 start.go:300] post-start starting for "no-preload-352076" (driver="kvm2")
	I0911 12:08:42.260283 2255048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:42.260307 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.260700 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:42.260738 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.263782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264157 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.264197 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264358 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.264595 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.264808 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.265010 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.356470 2255048 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:42.361886 2255048 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:42.361921 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:42.362004 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:42.362082 2255048 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:42.362196 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:42.372005 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:42.400800 2255048 start.go:303] post-start completed in 140.51468ms
	I0911 12:08:42.400850 2255048 fix.go:56] fixHost completed within 24.064734762s
	I0911 12:08:42.400882 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.404351 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.404799 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.404862 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.405055 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.405297 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405484 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405644 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.405859 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:42.406477 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:42.406505 2255048 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:08:42.529978 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434122.467205529
	
	I0911 12:08:42.530008 2255048 fix.go:206] guest clock: 1694434122.467205529
	I0911 12:08:42.530020 2255048 fix.go:219] Guest: 2023-09-11 12:08:42.467205529 +0000 UTC Remote: 2023-09-11 12:08:42.400855668 +0000 UTC m=+369.043734805 (delta=66.349861ms)
	I0911 12:08:42.530049 2255048 fix.go:190] guest clock delta is within tolerance: 66.349861ms
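fix.go above compares the guest's `date +%s.%N` output against the local wall clock and accepts the drift when it stays below a tolerance. A rough sketch of that comparison; the tolerance value here is an assumption, not minikube's exact threshold.

// clockdelta.go - sketch: compare a guest's reported epoch time against the local clock.
package main

import (
	"fmt"
	"math"
	"strconv"
	"time"
)

func main() {
	// Would normally come from running `date +%s.%N` on the guest over SSH.
	guestOut := "1694434122.467205529"
	secs, err := strconv.ParseFloat(guestOut, 64)
	if err != nil {
		panic(err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := time.Since(guest)
	const tolerance = 2 * time.Second // assumed threshold
	if math.Abs(delta.Seconds()) <= tolerance.Seconds() {
		fmt.Printf("guest clock delta %v is within tolerance\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; would resync\n", delta)
	}
}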
	I0911 12:08:42.530062 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 24.19398788s
	I0911 12:08:42.530094 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.530397 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:42.533425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.533777 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.533809 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.534032 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534670 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534881 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534986 2255048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:42.535048 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.535193 2255048 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:42.535235 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.538009 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538210 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538356 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538386 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538551 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538630 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538658 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538748 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.538862 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538939 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539033 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.539212 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539226 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.539373 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.659315 2255048 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:42.666117 2255048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:42.827592 2255048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:42.834283 2255048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:42.834379 2255048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:42.855077 2255048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:42.855107 2255048 start.go:466] detecting cgroup driver to use...
	I0911 12:08:42.855187 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:42.871553 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:42.886253 2255048 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:42.886341 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:42.902211 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:42.917991 2255048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:43.043679 2255048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:43.182633 2255048 docker.go:212] disabling docker service ...
	I0911 12:08:43.182709 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:43.200269 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:43.216232 2255048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:43.338376 2255048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:43.460730 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:43.478083 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:43.499948 2255048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:43.500018 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.513007 2255048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:43.513098 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.526435 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.539976 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
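The `sed -i` runs above point cri-o at the registry.k8s.io/pause:3.9 pause image and switch its cgroup manager to cgroupfs by editing the 02-crio.conf drop-in. A small Go sketch of the same in-place substitution, operating on a local copy of that file.

// crioconf.go - sketch: rewrite the pause_image and cgroup_manager lines in a crio drop-in.
package main

import (
	"os"
	"regexp"
)

func main() {
	const path = "02-crio.conf" // local copy; on the node this is /etc/crio/crio.conf.d/02-crio.conf
	data, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}
	out := regexp.MustCompile(`(?m)^.*pause_image = .*$`).
		ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.9"`))
	out = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
		ReplaceAll(out, []byte(`cgroup_manager = "cgroupfs"`))
	if err := os.WriteFile(path, out, 0o644); err != nil {
		panic(err)
	}
}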
	I0911 12:08:43.553967 2255048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:43.568765 2255048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:43.580392 2255048 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:43.580481 2255048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:43.599784 2255048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
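Before restarting cri-o, the log loads br_netfilter when the bridge sysctl is missing and enables IPv4 forwarding. A minimal sketch of those two steps (requires root on the node):

// netprep.go - sketch: load br_netfilter if needed and enable IPv4 forwarding.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Load br_netfilter if the bridge sysctl is absent, mirroring the modprobe in the log.
	if _, err := os.Stat("/proc/sys/net/bridge/bridge-nf-call-iptables"); err != nil {
		if out, err := exec.Command("modprobe", "br_netfilter").CombinedOutput(); err != nil {
			fmt.Printf("modprobe br_netfilter failed: %v\n%s\n", err, out)
		}
	}
	// Equivalent of `echo 1 > /proc/sys/net/ipv4/ip_forward`.
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
	fmt.Println("ip_forward enabled")
}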
	I0911 12:08:43.612160 2255048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:43.725608 2255048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:43.930261 2255048 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:43.930390 2255048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:43.937749 2255048 start.go:534] Will wait 60s for crictl version
	I0911 12:08:43.937875 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:43.942818 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:43.986093 2255048 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:43.986210 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.042887 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.106673 2255048 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:08:45.592797 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.592855 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.592874 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.637810 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.637846 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.940997 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.947826 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:45.947867 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.440462 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.449732 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:46.449772 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.940777 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.946988 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:08:46.957787 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:08:46.957832 2255814 api_server.go:131] duration metric: took 6.519019358s to wait for apiserver health ...
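api_server.go above keeps probing https://192.168.39.230:8444/healthz, tolerating 403 and 500 responses until the endpoint answers 200 `ok`. A compact sketch of such a wait loop; the poll interval and overall timeout are assumptions, and certificate verification is skipped because the probe is anonymous, as in the log.

// healthzwait.go - sketch: poll an apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute) // assumed overall timeout
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.39.230:8444/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}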
	I0911 12:08:46.957845 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:46.957854 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:46.960358 2255814 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:43.138628 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:45.640990 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:46.962120 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:46.987804 2255814 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
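The two lines above create /etc/cni/net.d and write the 457-byte bridge conflist 1-k8s.conflist into it. The sketch below writes a minimal bridge CNI config of the same general shape; the exact fields and subnet minikube emits may differ.

// cniconf.go - sketch: write a minimal bridge CNI conflist (fields are illustrative).
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}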
	I0911 12:08:47.021845 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:47.042508 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:08:47.042560 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:08:47.042575 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:08:47.042585 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:08:47.042600 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:08:47.042612 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:08:47.042624 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:08:47.042641 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:08:47.042652 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:08:47.042663 2255814 system_pods.go:74] duration metric: took 20.787272ms to wait for pod list to return data ...
	I0911 12:08:47.042677 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:47.048412 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:47.048524 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:47.048547 2255814 node_conditions.go:105] duration metric: took 5.861231ms to run NodePressure ...
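system_pods.go and node_conditions.go above list the kube-system pods and read node CPU and ephemeral-storage capacity through the API. A client-go sketch of those two reads; the kubeconfig path is a placeholder.

// kubesysteminfo.go - sketch: list kube-system pods and print node capacity with client-go.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))

	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		fmt.Printf("node %s: cpu=%s ephemeral-storage=%s\n",
			n.Name, n.Status.Capacity.Cpu().String(), n.Status.Capacity.StorageEphemeral().String())
	}
}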
	I0911 12:08:47.048595 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:47.550933 2255814 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556511 2255814 kubeadm.go:787] kubelet initialised
	I0911 12:08:47.556543 2255814 kubeadm.go:788] duration metric: took 5.579487ms waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556554 2255814 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:47.563694 2255814 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.569943 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.569975 2255814 pod_ready.go:81] duration metric: took 6.244443ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.569986 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.570001 2255814 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.576703 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576777 2255814 pod_ready.go:81] duration metric: took 6.7656ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.576791 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576805 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.587740 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587788 2255814 pod_ready.go:81] duration metric: took 10.95451ms waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.587813 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587833 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.596430 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596468 2255814 pod_ready.go:81] duration metric: took 8.617854ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.596481 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596492 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.956009 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956047 2255814 pod_ready.go:81] duration metric: took 359.546333ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.956060 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956078 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:44.108577 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:44.112208 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.112736 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:44.112782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.113074 2255048 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:44.119517 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:44.140305 2255048 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:44.140398 2255048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:44.184487 2255048 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:44.184529 2255048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:44.184600 2255048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.184910 2255048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.185117 2255048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.185240 2255048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.185366 2255048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.185790 2255048 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.185987 2255048 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0911 12:08:44.186471 2255048 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.186856 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.186943 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.187105 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.187306 2255048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.187523 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.187570 2255048 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0911 12:08:44.188031 2255048 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.188698 2255048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.350727 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.351429 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.353625 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.356576 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.374129 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0911 12:08:44.385524 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.410764 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.472311 2255048 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0911 12:08:44.472382 2255048 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.472453 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.572121 2255048 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0911 12:08:44.572186 2255048 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.572258 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589426 2255048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0911 12:08:44.589558 2255048 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.589492 2255048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0911 12:08:44.589638 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589643 2255048 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.589692 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691568 2255048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0911 12:08:44.691627 2255048 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.691657 2255048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0911 12:08:44.691734 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.691767 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.691749 2255048 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.691816 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691705 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691943 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.691955 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.729362 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.778025 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0911 12:08:44.778152 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0911 12:08:44.778215 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:44.778280 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.799788 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0911 12:08:44.799952 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:08:44.799997 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.800112 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.800183 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0911 12:08:44.800283 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:44.851138 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0911 12:08:44.851174 2255048 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851192 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0911 12:08:44.851227 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0911 12:08:44.851239 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851141 2255048 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0911 12:08:44.851363 2255048 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.851430 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.896214 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0911 12:08:44.896261 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0911 12:08:44.896310 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0911 12:08:44.896376 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:44.896377 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:08:46.231671 2255048 ssh_runner.go:235] Completed: which crictl: (1.380174214s)
	I0911 12:08:46.231732 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (1.33531707s)
	I0911 12:08:46.231734 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.38044194s)
	I0911 12:08:46.231760 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0911 12:08:46.231767 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0911 12:08:46.231780 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:46.231781 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231821 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231777 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1: (1.335378451s)
	I0911 12:08:46.231904 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
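cache_images.go above detects which images are missing from the container runtime, copies the cached tarballs into /var/lib/minikube/images, and loads them with `sudo podman load -i`. A minimal sketch of that load step executed on the node itself; the tarball paths are taken from the log.

// loadimage.go - sketch: load cached image tarballs into the node's image store via podman.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	tarballs := []string{
		"/var/lib/minikube/images/coredns_v1.10.1",
		"/var/lib/minikube/images/kube-controller-manager_v1.28.1",
	}
	for _, t := range tarballs {
		cmd := exec.Command("sudo", "podman", "load", "-i", t)
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("load %s failed: %v\n%s\n", t, err, out)
			continue
		}
		fmt.Printf("loaded %s\n%s\n", t, out)
	}
}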
	I0911 12:08:48.356501 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356547 2255814 pod_ready.go:81] duration metric: took 400.453753ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.356563 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356575 2255814 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:48.756718 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756761 2255814 pod_ready.go:81] duration metric: took 400.17438ms waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.756775 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756786 2255814 pod_ready.go:38] duration metric: took 1.200219314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:48.756806 2255814 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:48.775561 2255814 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:48.775587 2255814 kubeadm.go:640] restartCluster took 22.189536767s
	I0911 12:08:48.775598 2255814 kubeadm.go:406] StartCluster complete in 22.23955062s
	I0911 12:08:48.775621 2255814 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.775713 2255814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:48.778091 2255814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.778397 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:48.778424 2255814 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:48.778566 2255814 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778597 2255814 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.778614 2255814 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:48.778648 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:48.778696 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.778718 2255814 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778734 2255814 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-484027"
	I0911 12:08:48.779141 2255814 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.779145 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779159 2255814 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.779167 2255814 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:48.779173 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779207 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.779289 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779343 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779509 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779556 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.786929 2255814 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-484027" context rescaled to 1 replicas
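kapi.go above rescales the kube-system/coredns deployment to a single replica for the one-node cluster. A client-go sketch of the same scale update; the kubeconfig path is a placeholder.

// rescalecoredns.go - sketch: scale the kube-system/coredns deployment to 1 replica.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deployments := cs.AppsV1().Deployments("kube-system")
	s, err := deployments.GetScale(context.TODO(), "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	s.Spec.Replicas = 1
	if _, err := deployments.UpdateScale(context.TODO(), "coredns", s, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}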
	I0911 12:08:48.786996 2255814 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:48.789204 2255814 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:48.790973 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:48.799774 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0911 12:08:48.800366 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.800462 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0911 12:08:48.801065 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.801286 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.801312 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802064 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.802091 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802105 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0911 12:08:48.802166 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802495 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.802842 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.803804 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.803827 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.804437 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.805108 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.805156 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.823113 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0911 12:08:48.823705 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.824347 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.824378 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.824848 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.825073 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.827337 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.827355 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0911 12:08:48.830403 2255814 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:48.827726 2255814 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-484027"
	I0911 12:08:48.828116 2255814 main.go:141] libmachine: () Calling .GetVersion
	W0911 12:08:48.832240 2255814 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:48.832297 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.832351 2255814 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:48.832372 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:48.832404 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.832767 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.832846 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.833819 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.833843 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.834348 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.834583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.836499 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.837953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838586 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.838616 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838662 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.838863 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.839009 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.839383 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.848085 2255814 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:48.850041 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:48.850077 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:48.850117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.853766 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.854324 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.855024 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.855222 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.855427 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.857253 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0911 12:08:48.858013 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.858572 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.858593 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.858922 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.859424 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.859461 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.877066 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0911 12:08:48.877762 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.878430 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.878451 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.878986 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.879214 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.881452 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.881771 2255814 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:48.881790 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:48.881810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.885826 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.886380 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.886406 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.887000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.887261 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.887456 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.887604 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.990643 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:49.087344 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:49.087379 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:49.088448 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:49.172284 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:49.172325 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:49.284010 2255814 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:49.284396 2255814 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:49.296054 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:49.296086 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:49.379706 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:51.018731 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.028036666s)
	I0911 12:08:51.018796 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.018733 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.930229373s)
	I0911 12:08:51.018900 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018920 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019201 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019252 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019291 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019304 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019315 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019325 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019420 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019433 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019445 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019457 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021142 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021184 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021199 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021204 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021238 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.021259 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021542 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021615 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021683 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.122492 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742646501s)
	I0911 12:08:51.122563 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.122582 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123214 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123224 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.123232 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123668 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123713 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123729 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123743 2255814 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-484027"
	I0911 12:08:51.126333 2255814 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:08:48.273682 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:50.640588 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:51.128042 2255814 addons.go:502] enable addons completed in 2.34962006s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:08:51.299348 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:49.857883 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.62602487s)
	I0911 12:08:49.857920 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0911 12:08:49.857935 2255048 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858008 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858007 2255048 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.626200516s)
	I0911 12:08:49.858128 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0911 12:08:49.858215 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:08:53.140732 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:55.639106 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:53.799851 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:56.661585 2255814 node_ready.go:49] node "default-k8s-diff-port-484027" has status "Ready":"True"
	I0911 12:08:56.661621 2255814 node_ready.go:38] duration metric: took 7.377564832s waiting for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:56.661651 2255814 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:56.675600 2255814 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.686880 2255814 pod_ready.go:92] pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.686977 2255814 pod_ready.go:81] duration metric: took 11.34453ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.687027 2255814 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.695897 2255814 pod_ready.go:92] pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.695991 2255814 pod_ready.go:81] duration metric: took 8.931143ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.696011 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:57.305638 2255048 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (7.447392742s)
	I0911 12:08:57.305689 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0911 12:08:57.305809 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.447768556s)
	I0911 12:08:57.305836 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0911 12:08:57.305855 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:57.305932 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:58.142333 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.644281 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:58.721936 2255814 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.721964 2255814 pod_ready.go:81] duration metric: took 2.025944093s waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.721978 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728483 2255814 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.728509 2255814 pod_ready.go:81] duration metric: took 6.525188ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728522 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868777 2255814 pod_ready.go:92] pod "kube-proxy-ldgjr" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.868821 2255814 pod_ready.go:81] duration metric: took 140.280926ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868839 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266668 2255814 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:59.266699 2255814 pod_ready.go:81] duration metric: took 397.852252ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266710 2255814 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:01.578711 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.172738 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.866760661s)
	I0911 12:09:00.172779 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0911 12:09:00.172904 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:00.172989 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:01.745988 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.572965994s)
	I0911 12:09:01.746029 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0911 12:09:01.746047 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:01.746105 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:03.140327 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:05.141268 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:04.080056 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:06.578690 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:03.814358 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.068208039s)
	I0911 12:09:03.814432 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0911 12:09:03.814452 2255048 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:03.814516 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:04.982461 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.167909383s)
	I0911 12:09:04.982505 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0911 12:09:04.982542 2255048 cache_images.go:123] Successfully loaded all cached images
	I0911 12:09:04.982549 2255048 cache_images.go:92] LoadImages completed in 20.798002598s
	I0911 12:09:04.982647 2255048 ssh_runner.go:195] Run: crio config
	I0911 12:09:05.047992 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:05.048024 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:05.048049 2255048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:09:05.048070 2255048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-352076 NodeName:no-preload-352076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:09:05.048268 2255048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-352076"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:09:05.048352 2255048 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-352076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
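	The two blocks above are the generated kubeadm config (later written to /var/tmp/minikube/kubeadm.yaml.new and diffed against the existing kubeadm.yaml) and the kubelet systemd drop-in. A minimal sketch of reading that multi-document YAML back and reporting the fields the restart path cares about, assuming the gopkg.in/yaml.v3 package; the file path and struct below are illustrative, not minikube's actual code:

	package main

	import (
		"fmt"
		"os"
		"strings"

		"gopkg.in/yaml.v3"
	)

	// clusterConfig captures only the ClusterConfiguration fields checked here.
	type clusterConfig struct {
		Kind                 string `yaml:"kind"`
		KubernetesVersion    string `yaml:"kubernetesVersion"`
		ControlPlaneEndpoint string `yaml:"controlPlaneEndpoint"`
	}

	func main() {
		raw, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new") // illustrative path
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// The file holds several YAML documents separated by "---"; decode each
		// one and print the ClusterConfiguration summary.
		for _, doc := range strings.Split(string(raw), "\n---\n") {
			var cfg clusterConfig
			if err := yaml.Unmarshal([]byte(doc), &cfg); err != nil {
				continue
			}
			if cfg.Kind == "ClusterConfiguration" {
				fmt.Printf("version=%s endpoint=%s\n", cfg.KubernetesVersion, cfg.ControlPlaneEndpoint)
			}
		}
	}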
	I0911 12:09:05.048427 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:09:05.060720 2255048 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:09:05.060881 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:09:05.072228 2255048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:09:05.093943 2255048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:09:05.113383 2255048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0911 12:09:05.136859 2255048 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0911 12:09:05.143807 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:09:05.160629 2255048 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076 for IP: 192.168.72.157
	I0911 12:09:05.160686 2255048 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:09:05.161057 2255048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:09:05.161131 2255048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:09:05.161253 2255048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.key
	I0911 12:09:05.161367 2255048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key.66fc92c5
	I0911 12:09:05.161447 2255048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key
	I0911 12:09:05.161605 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:09:05.161646 2255048 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:09:05.161655 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:09:05.161696 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:09:05.161745 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:09:05.161773 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:09:05.161838 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:09:05.162864 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:09:05.196273 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 12:09:05.226310 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:09:05.259094 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:09:05.296329 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:09:05.329537 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:09:05.363893 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:09:05.398183 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:09:05.431986 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:09:05.462584 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:09:05.494047 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:09:05.531243 2255048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:09:05.554858 2255048 ssh_runner.go:195] Run: openssl version
	I0911 12:09:05.564158 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:09:05.578611 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585480 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585563 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.592835 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:09:05.606413 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:09:05.618978 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626101 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626179 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.634526 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:09:05.648394 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:09:05.664598 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671632 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671734 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.679143 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
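	The certificate steps above copy each CA into /usr/share/ca-certificates and link it from /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL-style CApath lookups locate a CA without rebuilding the bundle. A minimal sketch of that same pattern, assuming openssl is on PATH and using illustrative paths; it is not the ssh_runner code itself:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"strings"
	)

	// installCA computes the OpenSSL subject hash of a certificate (the same
	// "openssl x509 -hash -noout -in <cert>" command the log runs) and links it
	// as /etc/ssl/certs/<hash>.0.
	func installCA(certPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
		// "ln -fs" equivalent: drop any stale link, then create the symlink.
		_ = os.Remove(link)
		return os.Symlink(certPath, link)
	}

	func main() {
		if err := installCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}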
	I0911 12:09:05.691797 2255048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:09:05.698734 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:09:05.706797 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:09:05.713927 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:09:05.721394 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:09:05.728652 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:09:05.736364 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 12:09:05.744505 2255048 kubeadm.go:404] StartCluster: {Name:no-preload-352076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:09:05.744673 2255048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:09:05.744751 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:05.783568 2255048 cri.go:89] found id: ""
	I0911 12:09:05.783665 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:09:05.794403 2255048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:09:05.794443 2255048 kubeadm.go:636] restartCluster start
	I0911 12:09:05.794532 2255048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:09:05.808458 2255048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.809808 2255048 kubeconfig.go:92] found "no-preload-352076" server: "https://192.168.72.157:8443"
	I0911 12:09:05.812541 2255048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:09:05.824406 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.824488 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.838004 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.838029 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.838081 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.850725 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.351553 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.351683 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.365583 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.851068 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.851203 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.865829 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.351654 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.351840 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.365462 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.851109 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.851227 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.865132 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:08.351854 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.351980 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.364980 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.637342 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.637899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.638591 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.078188 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.575790 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:08.850933 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.851079 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.865313 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.350825 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.350918 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.363633 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.850908 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.851009 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.864051 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.351371 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.351459 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.364187 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.851868 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.851993 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.865706 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.351327 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.351445 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.364860 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.851490 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.851579 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.865090 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.351698 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.351841 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.365554 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.851082 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.851189 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.863359 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.351652 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.351762 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.364220 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.638913 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.138385 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:14.075701 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.083424 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:13.851558 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.851650 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.864548 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.351104 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.351196 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.363567 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.851181 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.851287 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.865371 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.351813 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:15.351921 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:15.364728 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.825491 2255048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:09:15.825532 2255048 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:09:15.825549 2255048 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:09:15.825628 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:15.863098 2255048 cri.go:89] found id: ""
	I0911 12:09:15.863207 2255048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:09:15.881673 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:09:15.892264 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:09:15.892363 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903142 2255048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903168 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:16.075542 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.073042 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.305269 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.399770 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.484630 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:09:17.484713 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:17.502746 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.017919 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.139562 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:20.643130 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.578074 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:21.077490 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.517850 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.018007 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.518125 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.018229 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.062967 2255048 api_server.go:72] duration metric: took 2.578334133s to wait for apiserver process to appear ...
	I0911 12:09:20.062999 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:09:20.063024 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.063765 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.063812 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.064348 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.564847 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.276251 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.276297 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.276314 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.320049 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.320081 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.564444 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.570484 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:24.570524 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.064830 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.071229 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:25.071269 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.564901 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.570887 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:09:25.580713 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:09:25.580746 2255048 api_server.go:131] duration metric: took 5.517738896s to wait for apiserver health ...
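	The healthz polling above can be reproduced by hand against the same endpoint shown in the log; a minimal sketch (only the IP and port come from the log, the rest is an illustrative manual probe, not minikube's own code):

	    # -k skips TLS verification, since the apiserver certificate is not in the host trust store.
	    curl -k https://192.168.72.157:8443/healthz
	    # While the control plane is still settling this typically returns 403 for the anonymous
	    # user (RBAC bootstrap roles not applied yet), then 500 while post-start hooks finish,
	    # and finally 200 "ok", matching the sequence recorded above.
	    curl -k "https://192.168.72.157:8443/healthz?verbose"
	    # ?verbose requests the per-check [+]/[-] breakdown like the one in the 500 responses.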
	I0911 12:09:25.580759 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:25.580768 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:25.583425 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:09:23.139085 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.140565 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:23.077522 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.576471 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.585300 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:09:25.610742 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
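	The log records a 457-byte bridge CNI conflist being written to /etc/cni/net.d/1-k8s.conflist but not its contents. The following is an illustrative bridge + host-local configuration of the kind such a file usually contains, not minikube's exact file; the subnet and field values are assumptions:

	    sudo mkdir -p /etc/cni/net.d
	    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
	    {
	      "cniVersion": "0.4.0",
	      "name": "bridge",
	      "plugins": [
	        {
	          "type": "bridge",
	          "bridge": "bridge",
	          "isDefaultGateway": true,
	          "ipMasq": true,
	          "hairpinMode": true,
	          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
	        },
	        { "type": "portmap", "capabilities": { "portMappings": true } }
	      ]
	    }
	    EOF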
	I0911 12:09:25.660757 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:09:25.680043 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:09:25.680087 2255048 system_pods.go:61] "coredns-5dd5756b68-mghg7" [380c0d4e-d7e3-4434-9757-f4debc5206d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:09:25.680104 2255048 system_pods.go:61] "etcd-no-preload-352076" [4f74cb61-25fb-4478-afd4-3b0f0ef1bdae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:09:25.680115 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [09ed0349-f0dc-4ab0-b057-230daeb8e7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:09:25.680127 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [c93ec6ac-408b-4859-b45b-79bb3e3b53d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:09:25.680142 2255048 system_pods.go:61] "kube-proxy-f748l" [8379d15e-e886-48cb-8a53-3a48aef7c9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:09:25.680157 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [7e7068d1-7f6b-4fe7-b1f4-73ddab4c7db4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:09:25.680174 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-tvrkk" [7b463025-d2f8-4f1d-aa69-740cd828c1d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:09:25.680188 2255048 system_pods.go:61] "storage-provisioner" [52928c2e-1383-41b0-817c-203d016da7df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:09:25.680201 2255048 system_pods.go:74] duration metric: took 19.417405ms to wait for pod list to return data ...
	I0911 12:09:25.680220 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:09:25.685088 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:09:25.685127 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:09:25.685144 2255048 node_conditions.go:105] duration metric: took 4.914847ms to run NodePressure ...
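	The NodePressure verification above reads the node's reported ephemeral-storage and CPU capacity. The same fields can be inspected directly; an illustrative query, assuming the kubeconfig context and node name follow the profile name seen in the pod list:

	    # Prints the capacity map (cpu, memory, ephemeral-storage, pods) that the check reads.
	    kubectl --context no-preload-352076 get node no-preload-352076 -o jsonpath='{.status.capacity}'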
	I0911 12:09:25.685170 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:26.127026 2255048 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137211 2255048 kubeadm.go:787] kubelet initialised
	I0911 12:09:26.137247 2255048 kubeadm.go:788] duration metric: took 10.126758ms waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137258 2255048 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:09:26.144732 2255048 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:28.168555 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
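	The pod_ready checks above and below poll the Ready condition of each system-critical pod. Roughly the same checks can be expressed with kubectl; a hypothetical sketch (context name assumed from the profile, labels assumed from the waiting list logged above):

	    # Wait for CoreDNS to report Ready, as pod_ready.go is doing for coredns-5dd5756b68-mghg7.
	    kubectl --context no-preload-352076 -n kube-system wait pod \
	      -l k8s-app=kube-dns --for=condition=Ready --timeout=4m
	    # Show the Ready condition that keeps being reported as "False" for metrics-server.
	    kubectl --context no-preload-352076 -n kube-system get pod -l k8s-app=metrics-server \
	      -o jsonpath='{.items[*].status.conditions[?(@.type=="Ready")].status}'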
	I0911 12:09:27.637951 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.142107 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.144784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:28.078707 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.575535 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.575917 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.169198 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:31.168599 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:31.168623 2255048 pod_ready.go:81] duration metric: took 5.02386193s waiting for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:31.168633 2255048 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194954 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:32.194986 2255048 pod_ready.go:81] duration metric: took 1.026346965s waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194997 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218527 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:33.218555 2255048 pod_ready.go:81] duration metric: took 1.02355184s waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218568 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:34.637330 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:36.638472 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:34.577030 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:37.076594 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:35.576857 2255048 pod_ready.go:102] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:38.072765 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.072791 2255048 pod_ready.go:81] duration metric: took 4.854217828s waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.072807 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080177 2255048 pod_ready.go:92] pod "kube-proxy-f748l" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.080219 2255048 pod_ready.go:81] duration metric: took 7.386736ms waiting for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080234 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086910 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.086935 2255048 pod_ready.go:81] duration metric: took 6.692353ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086947 2255048 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:39.139899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.638556 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:39.076977 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.077356 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:40.275588 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:42.279343 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.140467 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.638950 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:43.575930 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.075946 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.773655 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.773783 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.639947 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.136953 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.076228 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:50.076280 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:52.575191 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.781871 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.276719 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.137841 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.639201 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:54.575724 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:56.577539 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.774303 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.775398 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:57.776172 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:58.137820 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.140032 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:59.075343 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:01.077352 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.274288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.281024 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.637659 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.638359 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:07.138194 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:03.576039 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:05.581746 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.774609 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:06.777649 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.638158 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:12.138452 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:08.086089 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:10.577034 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.274229 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:11.773772 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:14.637905 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.137141 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.075497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:15.075928 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.077025 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.777087 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:16.273244 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:18.274393 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.138225 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.638206 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.574944 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.577126 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:20.274987 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:22.774026 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:23.638427 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:24.077660 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:26.576065 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.274996 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:27.773877 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.143807 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:30.639138 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.576550 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:31.076503 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:29.775191 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:32.275040 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.137429 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.137961 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:37.141067 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.575704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.576704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:34.773882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:36.774534 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:39.637647 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.639902 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.076297 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:40.577008 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.774671 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.274312 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.274935 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:44.137187 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:46.141314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.079758 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.589530 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.774930 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.273321 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.638868 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:51.139417 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.076212 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.078989 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.575259 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.274454 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.275086 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:53.637980 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:55.638403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.575452 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:56.575714 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.777442 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:57.273658 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:58.136668 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:00.137799 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.077541 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.576462 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.275476 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.773680 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:02.636537 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.637865 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:07.136712 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.078863 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.577886 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:03.776995 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.274574 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:08.275266 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.137886 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.147508 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.075793 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.575828 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:10.275357 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:12.775241 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:13.638603 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.137986 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:14.076435 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.078427 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:15.275325 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:17.275446 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.138511 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.638477 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.575789 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.575987 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.576545 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:19.774865 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.280364 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:23.138801 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:25.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.577693 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:26.581497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.774606 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.274878 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.639126 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.640834 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:32.138497 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.079788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.575364 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.774769 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.777925 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.636906 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.640855 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:33.576041 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:35.577513 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.275601 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.282120 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:39.138445 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:41.638724 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.074500 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.077237 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:42.078135 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.774882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.776485 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.277653 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.639224 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.137265 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:44.574433 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.576378 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:45.776572 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.275210 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.137470 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.580531 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:51.076018 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.775117 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.775535 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.641468 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.138561 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.138875 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:53.078788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.079529 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.577003 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.274582 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.774611 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:59.637786 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:01.644407 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.075246 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.078022 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.274022 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.275711 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.137692 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.614957 2255187 pod_ready.go:81] duration metric: took 4m0.000726123s waiting for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:04.614999 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:04.615020 2255187 pod_ready.go:38] duration metric: took 4m6.604014313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:04.615056 2255187 kubeadm.go:640] restartCluster took 4m25.597873734s
	W0911 12:12:04.615156 2255187 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:12:04.615268 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
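	Here the embed-certs profile gives up after the 4m0s wait for metrics-server and falls back to a full kubeadm reset followed by a fresh init (the reset command is on the line above; the init command appears further below). When investigating such a timeout by hand, a reasonable first step is to look at why the pod never became Ready; an illustrative check with assumed context and label names:

	    kubectl --context embed-certs-235462 -n kube-system get pods -o wide
	    kubectl --context embed-certs-235462 -n kube-system describe pod -l k8s-app=metrics-server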
	I0911 12:12:04.576764 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:06.579533 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.779450 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:07.276202 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:08.580439 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.075465 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:09.277634 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.776920 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:13.076473 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:15.077335 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:17.574470 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:14.276806 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:16.774759 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:19.576080 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:22.078686 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:18.775173 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:21.274723 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:23.276576 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:24.082590 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:26.584485 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:25.277284 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:27.774953 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:29.079400 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:31.575879 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:30.278194 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:32.773872 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.434471 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.819147659s)
	I0911 12:12:37.434634 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:12:37.450370 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:12:37.463019 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:12:37.473313 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:12:37.473375 2255187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:12:33.578208 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.076227 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:34.775135 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.775239 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.703004 2255187 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:12:38.574884 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:40.577027 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:38.779298 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:41.274039 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.076990 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.077566 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:47.576057 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.775208 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.775382 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:48.274401 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:49.022486 2255187 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:12:49.022566 2255187 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:12:49.022667 2255187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:12:49.022825 2255187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:12:49.022994 2255187 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:12:49.023081 2255187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:12:49.025047 2255187 out.go:204]   - Generating certificates and keys ...
	I0911 12:12:49.025151 2255187 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:12:49.025249 2255187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:12:49.025340 2255187 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:12:49.025428 2255187 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:12:49.025521 2255187 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:12:49.025599 2255187 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:12:49.025703 2255187 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:12:49.025801 2255187 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:12:49.025898 2255187 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:12:49.026021 2255187 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:12:49.026083 2255187 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:12:49.026163 2255187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:12:49.026252 2255187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:12:49.026338 2255187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:12:49.026436 2255187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:12:49.026518 2255187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:12:49.026609 2255187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:12:49.026694 2255187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:12:49.028378 2255187 out.go:204]   - Booting up control plane ...
	I0911 12:12:49.028469 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:12:49.028538 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:12:49.028632 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:12:49.028759 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:12:49.028894 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:12:49.028960 2255187 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:12:49.029126 2255187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:12:49.029225 2255187 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504895 seconds
	I0911 12:12:49.029346 2255187 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:12:49.029485 2255187 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:12:49.029568 2255187 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:12:49.029801 2255187 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-235462 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:12:49.029864 2255187 kubeadm.go:322] [bootstrap-token] Using token: u1pjdn.ynd5x30gs2d5ngse
	I0911 12:12:49.031514 2255187 out.go:204]   - Configuring RBAC rules ...
	I0911 12:12:49.031635 2255187 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:12:49.031766 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:12:49.031961 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:12:49.032100 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:12:49.032234 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:12:49.032370 2255187 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:12:49.032513 2255187 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:12:49.032569 2255187 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:12:49.032641 2255187 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:12:49.032653 2255187 kubeadm.go:322] 
	I0911 12:12:49.032721 2255187 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:12:49.032733 2255187 kubeadm.go:322] 
	I0911 12:12:49.032850 2255187 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:12:49.032862 2255187 kubeadm.go:322] 
	I0911 12:12:49.032897 2255187 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:12:49.032954 2255187 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:12:49.033027 2255187 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:12:49.033034 2255187 kubeadm.go:322] 
	I0911 12:12:49.033113 2255187 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:12:49.033125 2255187 kubeadm.go:322] 
	I0911 12:12:49.033185 2255187 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:12:49.033194 2255187 kubeadm.go:322] 
	I0911 12:12:49.033272 2255187 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:12:49.033364 2255187 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:12:49.033478 2255187 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:12:49.033488 2255187 kubeadm.go:322] 
	I0911 12:12:49.033592 2255187 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:12:49.033674 2255187 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:12:49.033681 2255187 kubeadm.go:322] 
	I0911 12:12:49.033793 2255187 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.033940 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:12:49.033981 2255187 kubeadm.go:322] 	--control-plane 
	I0911 12:12:49.033994 2255187 kubeadm.go:322] 
	I0911 12:12:49.034117 2255187 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:12:49.034140 2255187 kubeadm.go:322] 
	I0911 12:12:49.034253 2255187 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.034398 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
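	(Aside, not part of this run: the bootstrap token printed above, u1pjdn.ynd5x30gs2d5ngse, has kubeadm's default 24h TTL. If the join step is reproduced by hand after it expires, a fresh join command can be printed on the control plane with standard kubeadm — a hedged reproduction note only:)
	    # hypothetical follow-up command, not executed in this test run
	    kubeadm token create --print-join-command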
	I0911 12:12:49.034424 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:12:49.034438 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:12:49.036358 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:12:49.037952 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:12:49.078613 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:12:49.171320 2255187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:12:49.171458 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.171492 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=embed-certs-235462 minikube.k8s.io/updated_at=2023_09_11T12_12_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.227806 2255187 ops.go:34] apiserver oom_adj: -16
	I0911 12:12:49.533909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.637357 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.234909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.734249 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.234928 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.734543 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:52.235022 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.576947 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.075970 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:50.275288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.775973 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.734323 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.234558 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.734598 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.235197 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.734524 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.234539 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.734806 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.234833 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.734868 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:57.235336 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.574674 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:56.577723 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:54.777705 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.274282 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.735164 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.234340 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.734332 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.234884 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.734265 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.234310 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.376532 2255187 kubeadm.go:1081] duration metric: took 11.205145428s to wait for elevateKubeSystemPrivileges.
	I0911 12:13:00.376577 2255187 kubeadm.go:406] StartCluster complete in 5m21.403889838s
	I0911 12:13:00.376632 2255187 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.376754 2255187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:13:00.379195 2255187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.379496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:13:00.379604 2255187 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:13:00.379714 2255187 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-235462"
	I0911 12:13:00.379735 2255187 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-235462"
	W0911 12:13:00.379744 2255187 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:13:00.379770 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:13:00.379813 2255187 addons.go:69] Setting default-storageclass=true in profile "embed-certs-235462"
	I0911 12:13:00.379829 2255187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235462"
	I0911 12:13:00.379872 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380021 2255187 addons.go:69] Setting metrics-server=true in profile "embed-certs-235462"
	I0911 12:13:00.380038 2255187 addons.go:231] Setting addon metrics-server=true in "embed-certs-235462"
	W0911 12:13:00.380053 2255187 addons.go:240] addon metrics-server should already be in state true
	I0911 12:13:00.380092 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380276 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380299 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380314 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380338 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380443 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380464 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.400206 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0911 12:13:00.400222 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0911 12:13:00.400384 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0911 12:13:00.400955 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400990 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400957 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.401597 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401619 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.401749 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401769 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402081 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402237 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.402249 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402314 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402602 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402785 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.402950 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402972 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402986 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.403016 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.424319 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0911 12:13:00.424352 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0911 12:13:00.424991 2255187 addons.go:231] Setting addon default-storageclass=true in "embed-certs-235462"
	W0911 12:13:00.425015 2255187 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:13:00.425039 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425053 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.425387 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425471 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.425496 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.425891 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.425904 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426206 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.426222 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426644 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.426842 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.428151 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.429014 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.431494 2255187 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:13:00.429852 2255187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-235462" context rescaled to 1 replicas
	I0911 12:13:00.430039 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.433081 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:13:00.433096 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:13:00.433121 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.433184 2255187 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:13:00.438048 2255187 out.go:177] * Verifying Kubernetes components...
	I0911 12:13:00.436324 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.437532 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.438207 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.442076 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:00.442211 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.442240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.443931 2255187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:13:00.442451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.445563 2255187 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.445579 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.445583 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:13:00.445606 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.445674 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.449267 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0911 12:13:00.449534 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.449823 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.450240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.450270 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.450451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.450818 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.450838 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.450906 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.451120 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.451298 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.452043 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.452652 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.452686 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.470652 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0911 12:13:00.471240 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.471865 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.471888 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.472326 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.472745 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.474485 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.475072 2255187 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.475093 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:13:00.475123 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.478333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478757 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.478788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478949 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.479157 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.479301 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.479434 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.601913 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:13:00.601946 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:13:00.629483 2255187 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.629938 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:13:00.651067 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.653504 2255187 node_ready.go:49] node "embed-certs-235462" has status "Ready":"True"
	I0911 12:13:00.653549 2255187 node_ready.go:38] duration metric: took 24.023395ms waiting for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.653564 2255187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:00.663033 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:13:00.663075 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:13:00.668515 2255187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.709787 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.751534 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.751565 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:13:00.782859 2255187 pod_ready.go:92] pod "etcd-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.782894 2255187 pod_ready.go:81] duration metric: took 114.332855ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.782910 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.823512 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.891619 2255187 pod_ready.go:92] pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.891678 2255187 pod_ready.go:81] duration metric: took 108.758908ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.891695 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001447 2255187 pod_ready.go:92] pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.001483 2255187 pod_ready.go:81] duration metric: took 109.778603ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001501 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164166 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.164205 2255187 pod_ready.go:81] duration metric: took 162.694687ms waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164216 2255187 pod_ready.go:38] duration metric: took 510.637428ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:01.164239 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:13:01.164300 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:12:59.081781 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:59.267524 2255814 pod_ready.go:81] duration metric: took 4m0.000791617s waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:59.267566 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:59.267580 2255814 pod_ready.go:38] duration metric: took 4m2.605912471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:59.267603 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:12:59.267645 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:12:59.267855 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:12:59.332014 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:12:59.332042 2255814 cri.go:89] found id: ""
	I0911 12:12:59.332053 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:12:59.332135 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.338400 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:12:59.338493 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:12:59.373232 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:12:59.373284 2255814 cri.go:89] found id: ""
	I0911 12:12:59.373296 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:12:59.373371 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.379199 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:12:59.379288 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:12:59.415804 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:12:59.415840 2255814 cri.go:89] found id: ""
	I0911 12:12:59.415852 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:12:59.415940 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.422256 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:12:59.422343 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:12:59.462300 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:12:59.462327 2255814 cri.go:89] found id: ""
	I0911 12:12:59.462336 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:12:59.462392 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.467244 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:12:59.467364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:12:59.499594 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.499619 2255814 cri.go:89] found id: ""
	I0911 12:12:59.499627 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:12:59.499697 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.504481 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:12:59.504570 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:12:59.536588 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.536620 2255814 cri.go:89] found id: ""
	I0911 12:12:59.536631 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:12:59.536701 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.541454 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:12:59.541529 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:12:59.577953 2255814 cri.go:89] found id: ""
	I0911 12:12:59.577990 2255814 logs.go:284] 0 containers: []
	W0911 12:12:59.578001 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:12:59.578010 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:12:59.578084 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:12:59.616256 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.616283 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.616288 2255814 cri.go:89] found id: ""
	I0911 12:12:59.616296 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:12:59.616350 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.621818 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.627431 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:12:59.627462 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:12:59.690633 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:12:59.690681 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.733084 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:12:59.733133 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.775174 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:12:59.775220 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:12:59.829438 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:12:59.829492 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.894842 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:12:59.894888 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.936662 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:12:59.936703 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:12:59.955507 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:12:59.955544 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:00.127082 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:00.127129 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:00.178458 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:00.178501 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:00.226759 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:00.226805 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:00.267586 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:00.267637 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:00.311431 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:00.311465 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:12:59.276905 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:01.775061 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:02.733813 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.103819607s)
	I0911 12:13:02.733859 2255187 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0911 12:13:03.298110 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.646997747s)
	I0911 12:13:03.298169 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298179 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298209 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.588380755s)
	I0911 12:13:03.298256 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298278 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298545 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298566 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298577 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298586 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298596 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298611 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298622 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298834 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.298891 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298904 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299077 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299104 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299117 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.299127 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.299083 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.299459 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299474 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.485702 2255187 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.321356388s)
	I0911 12:13:03.485741 2255187 api_server.go:72] duration metric: took 3.052522714s to wait for apiserver process to appear ...
	I0911 12:13:03.485748 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.485768 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:13:03.485987 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.66240811s)
	I0911 12:13:03.486070 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486090 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486553 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.486621 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486642 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486666 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486683 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486940 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486956 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486968 2255187 addons.go:467] Verifying addon metrics-server=true in "embed-certs-235462"
	I0911 12:13:03.489450 2255187 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:13:03.491514 2255187 addons.go:502] enable addons completed in 3.11190942s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:13:03.571696 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:13:03.576690 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:03.576730 2255187 api_server.go:131] duration metric: took 90.974437ms to wait for apiserver health ...
	I0911 12:13:03.576743 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:03.592687 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:03.592734 2255187 system_pods.go:61] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.592745 2255187 system_pods.go:61] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.592753 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.592761 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.592769 2255187 system_pods.go:61] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.592778 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.592787 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.592802 2255187 system_pods.go:61] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.592839 2255187 system_pods.go:74] duration metric: took 16.087864ms to wait for pod list to return data ...
	I0911 12:13:03.592855 2255187 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:03.606427 2255187 default_sa.go:45] found service account: "default"
	I0911 12:13:03.606517 2255187 default_sa.go:55] duration metric: took 13.6536ms for default service account to be created ...
	I0911 12:13:03.606542 2255187 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:03.622692 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.622752 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.622765 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.622777 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.622786 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.622801 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.622814 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.622980 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.623076 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.623157 2255187 retry.go:31] will retry after 240.25273ms: missing components: kube-dns, kube-proxy
	I0911 12:13:03.874980 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.875031 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.875041 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.875048 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.875081 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.875094 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.875104 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.875118 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.875130 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.875163 2255187 retry.go:31] will retry after 285.300702ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.171503 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.171548 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.171558 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.171566 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.171574 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.171580 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.171587 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.171598 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.171607 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.171632 2255187 retry.go:31] will retry after 386.395514ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.565931 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.565972 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.565982 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.565991 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.565998 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.566007 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.566015 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.566025 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.566039 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.566062 2255187 retry.go:31] will retry after 526.673ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.104101 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.104230 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:05.104257 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.104277 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.104294 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.104312 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:05.104336 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.104353 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.104363 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.104385 2255187 retry.go:31] will retry after 628.795734ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.745358 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.745392 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Running
	I0911 12:13:05.745400 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.745408 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.745416 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.745421 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Running
	I0911 12:13:05.745427 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.745440 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.745451 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.745463 2255187 system_pods.go:126] duration metric: took 2.138903103s to wait for k8s-apps to be running ...
	I0911 12:13:05.745480 2255187 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:05.745540 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:05.762725 2255187 system_svc.go:56] duration metric: took 17.229678ms WaitForService to wait for kubelet.
	I0911 12:13:05.762766 2255187 kubeadm.go:581] duration metric: took 5.329544538s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:05.762793 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:05.767056 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:05.767087 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:05.767112 2255187 node_conditions.go:105] duration metric: took 4.314286ms to run NodePressure ...
	I0911 12:13:05.767131 2255187 start.go:228] waiting for startup goroutines ...
	I0911 12:13:05.767138 2255187 start.go:233] waiting for cluster config update ...
	I0911 12:13:05.767147 2255187 start.go:242] writing updated cluster config ...
	I0911 12:13:05.767462 2255187 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:05.823796 2255187 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:05.826336 2255187 out.go:177] * Done! kubectl is now configured to use "embed-certs-235462" cluster and "default" namespace by default
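The retry lines earlier in this block ("will retry after ...: missing components: kube-dns, kube-proxy") come from minikube polling the kube-system pod list with a growing delay until every required component reports Running. Below is a minimal, self-contained Go sketch of that poll-with-backoff pattern; the listPods helper and the backoff growth factor are assumptions for illustration, not minikube's actual retry.go/system_pods.go code.

```go
// Sketch only (assumed helpers, not minikube source): poll the pod list with
// a growing delay until no required component is still Pending.
package main

import (
	"fmt"
	"time"
)

// podPhase maps a component name to the phase reported for its pod.
type podPhase map[string]string

// listPods is a hypothetical stand-in for querying the apiserver for
// kube-system pods; here it always reports everything Running.
func listPods() podPhase {
	return podPhase{"kube-dns": "Running", "kube-proxy": "Running"}
}

// waitForComponents retries until no required component is still Pending,
// printing the same kind of "will retry after ..." message seen in the log.
func waitForComponents(required []string, timeout time.Duration) error {
	backoff := 500 * time.Millisecond
	deadline := time.Now().Add(timeout)
	for {
		var missing []string
		pods := listPods()
		for _, name := range required {
			if pods[name] != "Running" {
				missing = append(missing, name)
			}
		}
		if len(missing) == 0 {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out waiting for: %v", missing)
		}
		fmt.Printf("will retry after %v: missing components: %v\n", backoff, missing)
		time.Sleep(backoff)
		backoff = backoff * 6 / 5 // grow the delay a little each attempt (assumed factor)
	}
}

func main() {
	_ = waitForComponents([]string{"kube-dns", "kube-proxy"}, 2*time.Minute)
}
```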
	I0911 12:13:03.450576 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:13:03.472433 2255814 api_server.go:72] duration metric: took 4m14.685379298s to wait for apiserver process to appear ...
	I0911 12:13:03.472469 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.472520 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:03.472614 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:03.515433 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:03.515471 2255814 cri.go:89] found id: ""
	I0911 12:13:03.515483 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:03.515560 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.521654 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:03.521745 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:03.569379 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:03.569406 2255814 cri.go:89] found id: ""
	I0911 12:13:03.569416 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:03.569481 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.574638 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:03.574723 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:03.610693 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.610722 2255814 cri.go:89] found id: ""
	I0911 12:13:03.610733 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:03.610794 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.615774 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:03.615894 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:03.657087 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:03.657117 2255814 cri.go:89] found id: ""
	I0911 12:13:03.657129 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:03.657211 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.662224 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:03.662315 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:03.698282 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.698359 2255814 cri.go:89] found id: ""
	I0911 12:13:03.698381 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:03.698466 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.704160 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:03.704246 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:03.748122 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.748152 2255814 cri.go:89] found id: ""
	I0911 12:13:03.748162 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:03.748238 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.752657 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:03.752742 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:03.786815 2255814 cri.go:89] found id: ""
	I0911 12:13:03.786853 2255814 logs.go:284] 0 containers: []
	W0911 12:13:03.786863 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:03.786871 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:03.786942 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:03.824384 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.824409 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:03.824414 2255814 cri.go:89] found id: ""
	I0911 12:13:03.824421 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:03.824497 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.830317 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.836320 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:03.836355 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.887480 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:03.887524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.930466 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:03.930507 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.966522 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:03.966563 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:04.026111 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:04.026168 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:04.045422 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:04.045468 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:04.185127 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:04.185179 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:04.235047 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:04.235089 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:04.856084 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:04.856134 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:04.903388 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:04.903433 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:04.964861 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:04.964916 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:05.007565 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:05.007605 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:05.069630 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:05.069676 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.608676 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:13:07.615388 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:13:07.617076 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:07.617101 2255814 api_server.go:131] duration metric: took 4.14462443s to wait for apiserver health ...
	I0911 12:13:07.617110 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:07.617138 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:07.617196 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:07.656726 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:07.656750 2255814 cri.go:89] found id: ""
	I0911 12:13:07.656760 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:07.656850 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.661277 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:07.661364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:07.697717 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:07.697746 2255814 cri.go:89] found id: ""
	I0911 12:13:07.697754 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:07.697842 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.703800 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:07.703888 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:07.747003 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:07.747033 2255814 cri.go:89] found id: ""
	I0911 12:13:07.747043 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:07.747122 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.751932 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:07.752007 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:07.785348 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:07.785375 2255814 cri.go:89] found id: ""
	I0911 12:13:07.785386 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:07.785460 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.790170 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:07.790237 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:07.827467 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:07.827496 2255814 cri.go:89] found id: ""
	I0911 12:13:07.827510 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:07.827583 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.834478 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:07.834552 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:07.873739 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:07.873766 2255814 cri.go:89] found id: ""
	I0911 12:13:07.873774 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:07.873828 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.878424 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:07.878528 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:07.916665 2255814 cri.go:89] found id: ""
	I0911 12:13:07.916696 2255814 logs.go:284] 0 containers: []
	W0911 12:13:07.916708 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:07.916716 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:07.916780 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:07.950146 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:07.950172 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.950177 2255814 cri.go:89] found id: ""
	I0911 12:13:07.950185 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:07.950256 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.954996 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.959157 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:07.959189 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:08.027081 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:08.027112 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.775843 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:06.274500 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:08.079481 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:08.079522 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:08.118655 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:08.118696 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:08.177644 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:08.177690 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:08.192495 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:08.192524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:08.338344 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:08.338388 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:08.385409 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:08.385454 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:08.420999 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:08.421033 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:08.457183 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:08.457223 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:08.500499 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:08.500531 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:08.550546 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:08.550587 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:08.584802 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:08.584854 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:11.626627 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:11.626661 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.626666 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.626670 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.626675 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.626679 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.626683 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.626690 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.626696 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.626702 2255814 system_pods.go:74] duration metric: took 4.009586477s to wait for pod list to return data ...
	I0911 12:13:11.626710 2255814 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:11.630703 2255814 default_sa.go:45] found service account: "default"
	I0911 12:13:11.630735 2255814 default_sa.go:55] duration metric: took 4.019315ms for default service account to be created ...
	I0911 12:13:11.630747 2255814 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:11.637643 2255814 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:11.637681 2255814 system_pods.go:89] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.637687 2255814 system_pods.go:89] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.637693 2255814 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.637697 2255814 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.637701 2255814 system_pods.go:89] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.637706 2255814 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.637713 2255814 system_pods.go:89] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.637720 2255814 system_pods.go:89] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.637727 2255814 system_pods.go:126] duration metric: took 6.974046ms to wait for k8s-apps to be running ...
	I0911 12:13:11.637734 2255814 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:11.637781 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:11.656267 2255814 system_svc.go:56] duration metric: took 18.513073ms WaitForService to wait for kubelet.
	I0911 12:13:11.656313 2255814 kubeadm.go:581] duration metric: took 4m22.869270451s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:11.656342 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:11.660206 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:11.660242 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:11.660256 2255814 node_conditions.go:105] duration metric: took 3.907675ms to run NodePressure ...
	I0911 12:13:11.660271 2255814 start.go:228] waiting for startup goroutines ...
	I0911 12:13:11.660281 2255814 start.go:233] waiting for cluster config update ...
	I0911 12:13:11.660295 2255814 start.go:242] writing updated cluster config ...
	I0911 12:13:11.660673 2255814 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:11.716963 2255814 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:11.719502 2255814 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-484027" cluster and "default" namespace by default
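Earlier in this run the same process logs "Checking apiserver healthz at https://192.168.39.230:8444/healthz ..." followed by a 200/"ok" response. A minimal Go sketch of that readiness probe follows; the InsecureSkipVerify transport and the hard-coded URL are simplifications to keep the example self-contained (minikube's api_server.go trusts the cluster CA instead).

```go
// Sketch only: probe the apiserver /healthz endpoint until it returns HTTP 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	// The real client verifies the cluster CA; skipping verification here is
	// only to keep this sketch runnable without the cluster certificates.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("%s returned 200\n", url)
				return nil
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("apiserver healthz did not become ready within %v", timeout)
}

func main() {
	// Address and port copied from the log above; hypothetical outside this run.
	_ = waitForHealthz("https://192.168.39.230:8444/healthz", 4*time.Minute)
}
```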
	I0911 12:13:08.774412 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:10.776103 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:13.273773 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:15.274785 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:17.776143 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:20.274491 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:22.276115 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:24.776008 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:26.776415 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:29.274644 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:31.774477 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:33.774923 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:35.776441 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:37.777677 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:38.087732 2255048 pod_ready.go:81] duration metric: took 4m0.000743055s waiting for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	E0911 12:13:38.087774 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:13:38.087805 2255048 pod_ready.go:38] duration metric: took 4m11.950533095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:38.087877 2255048 kubeadm.go:640] restartCluster took 4m32.29342443s
	W0911 12:13:38.087958 2255048 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:13:38.088001 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:14:10.169576 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.081486969s)
	I0911 12:14:10.169706 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:10.189300 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:14:10.202385 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:14:10.213749 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:14:10.213816 2255048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:14:10.279484 2255048 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:14:10.279634 2255048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:14:10.462302 2255048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:14:10.462488 2255048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:14:10.462634 2255048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:14:10.659475 2255048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:14:10.661923 2255048 out.go:204]   - Generating certificates and keys ...
	I0911 12:14:10.662086 2255048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:14:10.662142 2255048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:14:10.662223 2255048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:14:10.662303 2255048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:14:10.663973 2255048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:14:10.665836 2255048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:14:10.667292 2255048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:14:10.668584 2255048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:14:10.669931 2255048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:14:10.670570 2255048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:14:10.671008 2255048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:14:10.671087 2255048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:14:10.865541 2255048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:14:11.063586 2255048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:14:11.341833 2255048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:14:11.573561 2255048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:14:11.574128 2255048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:14:11.577101 2255048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:14:11.579311 2255048 out.go:204]   - Booting up control plane ...
	I0911 12:14:11.579427 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:14:11.579550 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:14:11.579644 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:14:11.598440 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:14:11.599446 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:14:11.599531 2255048 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:14:11.738771 2255048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:14:21.243059 2255048 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503809 seconds
	I0911 12:14:21.243215 2255048 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:14:21.262148 2255048 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:14:21.802567 2255048 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:14:21.802822 2255048 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-352076 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:14:22.320035 2255048 kubeadm.go:322] [bootstrap-token] Using token: 3xtym4.6ytyj76o1n15fsq8
	I0911 12:14:22.321759 2255048 out.go:204]   - Configuring RBAC rules ...
	I0911 12:14:22.321922 2255048 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:14:22.329851 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:14:22.344882 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:14:22.349640 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:14:22.354357 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:14:22.359463 2255048 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:14:22.380068 2255048 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:14:22.713378 2255048 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:14:22.780207 2255048 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:14:22.780252 2255048 kubeadm.go:322] 
	I0911 12:14:22.780331 2255048 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:14:22.780344 2255048 kubeadm.go:322] 
	I0911 12:14:22.780441 2255048 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:14:22.780450 2255048 kubeadm.go:322] 
	I0911 12:14:22.780489 2255048 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:14:22.780568 2255048 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:14:22.780648 2255048 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:14:22.780657 2255048 kubeadm.go:322] 
	I0911 12:14:22.780757 2255048 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:14:22.780791 2255048 kubeadm.go:322] 
	I0911 12:14:22.780876 2255048 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:14:22.780895 2255048 kubeadm.go:322] 
	I0911 12:14:22.780958 2255048 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:14:22.781054 2255048 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:14:22.781157 2255048 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:14:22.781168 2255048 kubeadm.go:322] 
	I0911 12:14:22.781264 2255048 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:14:22.781363 2255048 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:14:22.781374 2255048 kubeadm.go:322] 
	I0911 12:14:22.781490 2255048 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.781618 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:14:22.781684 2255048 kubeadm.go:322] 	--control-plane 
	I0911 12:14:22.781695 2255048 kubeadm.go:322] 
	I0911 12:14:22.781813 2255048 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:14:22.781830 2255048 kubeadm.go:322] 
	I0911 12:14:22.781956 2255048 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.782107 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:14:22.783393 2255048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:14:22.783423 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:14:22.783434 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:14:22.785623 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:14:22.787278 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:14:22.817914 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:14:22.857165 2255048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:14:22.857266 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:22.857282 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=no-preload-352076 minikube.k8s.io/updated_at=2023_09_11T12_14_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.375677 2255048 ops.go:34] apiserver oom_adj: -16
	I0911 12:14:23.375731 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.497980 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.128149 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.627110 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.127658 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.627595 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.127143 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.627803 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.128061 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.627169 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.128081 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.628055 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.127187 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.627707 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.127233 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.627943 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.127222 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.627921 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.127760 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.628112 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.128107 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.627835 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.127171 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.627113 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.127499 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.627255 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.127199 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.314187 2255048 kubeadm.go:1081] duration metric: took 13.456994708s to wait for elevateKubeSystemPrivileges.
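The block of repeated "kubectl get sa default" runs above is minikube waiting, over SSH, for the default service account to exist before granting kube-system privileges. A rough local equivalent in Go, shelling out to kubectl, might look like the sketch below; the kubeconfig path mirrors the log and the roughly 500ms cadence is read off the timestamps, so this is not the exact minikube implementation.

```go
// Sketch only: re-run "kubectl get sa default" until it succeeds or times out.
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not created within %v", timeout)
}

func main() {
	// Path as used inside the minikube VM per the log; hypothetical elsewhere.
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
```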
	I0911 12:14:36.314241 2255048 kubeadm.go:406] StartCluster complete in 5m30.569752421s
	I0911 12:14:36.314272 2255048 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.314446 2255048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:14:36.317402 2255048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.317739 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:14:36.318031 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:14:36.317936 2255048 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:14:36.318110 2255048 addons.go:69] Setting storage-provisioner=true in profile "no-preload-352076"
	I0911 12:14:36.318135 2255048 addons.go:231] Setting addon storage-provisioner=true in "no-preload-352076"
	I0911 12:14:36.318137 2255048 addons.go:69] Setting default-storageclass=true in profile "no-preload-352076"
	I0911 12:14:36.318148 2255048 addons.go:69] Setting metrics-server=true in profile "no-preload-352076"
	I0911 12:14:36.318163 2255048 addons.go:231] Setting addon metrics-server=true in "no-preload-352076"
	I0911 12:14:36.318164 2255048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-352076"
	W0911 12:14:36.318169 2255048 addons.go:240] addon metrics-server should already be in state true
	I0911 12:14:36.318218 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	W0911 12:14:36.318143 2255048 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:14:36.318318 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.318696 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318710 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318720 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318723 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318738 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318741 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.337905 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0911 12:14:36.338002 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0911 12:14:36.338589 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.338678 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.339313 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339317 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339340 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339363 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339435 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0911 12:14:36.339903 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339909 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339981 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.340160 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.340463 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.340496 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.340588 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.340617 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.341051 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.341512 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.341540 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.359712 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0911 12:14:36.360342 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.360914 2255048 addons.go:231] Setting addon default-storageclass=true in "no-preload-352076"
	W0911 12:14:36.360941 2255048 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:14:36.360969 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.360969 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.360984 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.361238 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.361271 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.361350 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.361540 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.362624 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:14:36.363381 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.363731 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.364093 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.364114 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.366385 2255048 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:14:36.364716 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.368526 2255048 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.368557 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:14:36.368640 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.368799 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.371211 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.374123 2255048 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:14:36.373727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.374507 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.376914 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.376951 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.376846 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:14:36.376970 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:14:36.376991 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.377194 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.377424 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.377656 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.380757 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381482 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.381508 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381537 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.381783 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.381953 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.382098 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.383003 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0911 12:14:36.383415 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.383860 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.383884 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.384174 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.384600 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.384650 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.401421 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0911 12:14:36.401987 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.402660 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.402684 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.403172 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.403456 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.406003 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.406531 2255048 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.406567 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:14:36.406593 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.410520 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411016 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.411072 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411331 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.411517 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.411723 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.411895 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.448234 2255048 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-352076" context rescaled to 1 replicas
	I0911 12:14:36.448281 2255048 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:14:36.450615 2255048 out.go:177] * Verifying Kubernetes components...
	I0911 12:14:36.452566 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:36.600188 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:14:36.600187 2255048 node_ready.go:35] waiting up to 6m0s for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611125 2255048 node_ready.go:49] node "no-preload-352076" has status "Ready":"True"
	I0911 12:14:36.611167 2255048 node_ready.go:38] duration metric: took 10.942009ms waiting for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611181 2255048 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:36.632729 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:14:36.632759 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:14:36.640639 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:36.656421 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.659146 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.711603 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:14:36.711644 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:14:36.780574 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:36.780614 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:14:36.874964 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.969647165s)
	I0911 12:14:38.569949 2255048 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.91343277s)
	I0911 12:14:38.570001 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570017 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570428 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570469 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570484 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570440 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570495 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570786 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570801 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570803 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570820 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570830 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.571133 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.571183 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.571196 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.756212 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:39.258501 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599303563s)
	I0911 12:14:39.258567 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258581 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.258631 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.383622497s)
	I0911 12:14:39.258693 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258713 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259000 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259069 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259129 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259139 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259040 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259150 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259154 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259165 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259178 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259468 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259514 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259605 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259620 2255048 addons.go:467] Verifying addon metrics-server=true in "no-preload-352076"
	I0911 12:14:39.261573 2255048 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:14:39.263513 2255048 addons.go:502] enable addons completed in 2.945573816s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:14:41.194698 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:41.682872 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.682904 2255048 pod_ready.go:81] duration metric: took 5.042231142s waiting for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.682919 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.685265 2255048 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685295 2255048 pod_ready.go:81] duration metric: took 2.370305ms waiting for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	E0911 12:14:41.685306 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685313 2255048 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694255 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.694295 2255048 pod_ready.go:81] duration metric: took 8.974837ms waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694309 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700807 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.700854 2255048 pod_ready.go:81] duration metric: took 6.536644ms waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700869 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707895 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.707918 2255048 pod_ready.go:81] duration metric: took 7.041207ms waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707930 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880293 2255048 pod_ready.go:92] pod "kube-proxy-f5w2x" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.880329 2255048 pod_ready.go:81] duration metric: took 172.39121ms waiting for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880345 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280038 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:42.280066 2255048 pod_ready.go:81] duration metric: took 399.713688ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280074 2255048 pod_ready.go:38] duration metric: took 5.668879257s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:42.280093 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:14:42.280143 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:14:42.303868 2255048 api_server.go:72] duration metric: took 5.855535753s to wait for apiserver process to appear ...
	I0911 12:14:42.303906 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:14:42.303927 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:14:42.310890 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:14:42.313428 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:14:42.313455 2255048 api_server.go:131] duration metric: took 9.541682ms to wait for apiserver health ...
	I0911 12:14:42.313464 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:14:42.483863 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:14:42.483895 2255048 system_pods.go:61] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.483900 2255048 system_pods.go:61] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.483905 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.483909 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.483912 2255048 system_pods.go:61] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.483916 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.483923 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.483930 2255048 system_pods.go:61] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.483936 2255048 system_pods.go:74] duration metric: took 170.467243ms to wait for pod list to return data ...
	I0911 12:14:42.483945 2255048 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:14:42.679235 2255048 default_sa.go:45] found service account: "default"
	I0911 12:14:42.679270 2255048 default_sa.go:55] duration metric: took 195.319105ms for default service account to be created ...
	I0911 12:14:42.679284 2255048 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:14:42.883048 2255048 system_pods.go:86] 8 kube-system pods found
	I0911 12:14:42.883078 2255048 system_pods.go:89] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.883084 2255048 system_pods.go:89] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.883089 2255048 system_pods.go:89] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.883093 2255048 system_pods.go:89] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.883097 2255048 system_pods.go:89] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.883103 2255048 system_pods.go:89] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.883110 2255048 system_pods.go:89] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.883118 2255048 system_pods.go:89] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.883126 2255048 system_pods.go:126] duration metric: took 203.835523ms to wait for k8s-apps to be running ...
	I0911 12:14:42.883133 2255048 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:14:42.883181 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:42.897962 2255048 system_svc.go:56] duration metric: took 14.812893ms WaitForService to wait for kubelet.
	I0911 12:14:42.898000 2255048 kubeadm.go:581] duration metric: took 6.449678905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:14:42.898022 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:14:43.080859 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:14:43.080890 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:14:43.080901 2255048 node_conditions.go:105] duration metric: took 182.874167ms to run NodePressure ...
	I0911 12:14:43.080913 2255048 start.go:228] waiting for startup goroutines ...
	I0911 12:14:43.080919 2255048 start.go:233] waiting for cluster config update ...
	I0911 12:14:43.080930 2255048 start.go:242] writing updated cluster config ...
	I0911 12:14:43.081223 2255048 ssh_runner.go:195] Run: rm -f paused
	I0911 12:14:43.135636 2255048 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:14:43.137835 2255048 out.go:177] * Done! kubectl is now configured to use "no-preload-352076" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:08:09 UTC, ends at Mon 2023-09-11 12:22:13 UTC. --
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.288372621Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&PodSandboxMetadata{Name:busybox,Uid:1a6325f3-c610-437f-a81f-36da95fc4ebf,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434129871178438,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:45.938426303Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-xszs4,Uid:e58151f1-7503-49df-b847-67ac70d0ef74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169443
4129845079500,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:45.938427642Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b81b2c3cec4acba0a2b49eccf1ea3bf0972e3301d4c2b63fe9f9d1c983d3151a,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-tw6td,Uid:37d0a828-9243-4359-be39-1c2099835e45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434128262777730,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-tw6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d0a828-9243-4359-be39-1c2099835e45,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11
T12:08:45.938424481Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&PodSandboxMetadata{Name:kube-proxy-ldgjr,Uid:34e5049f-8cba-49bf-96af-f5e0338e4aa5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434126310003472,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e5049f-8cba-49bf-96af-f5e0338e4aa5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:45.938421157Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:deb073a7-107f-419d-9b5e-16c7722b957d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434126283435418,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-09-11T12:08:45.938409906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-484027,Uid:483e3b587026f25bcbe9b42b4b588cca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118495200712,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.230:8444,kubernetes.io/config.hash: 483e3b587026f25bcbe9b42b4b588cca,kubernetes.io/config.seen: 2023-09-11T12:08:37.925808333Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86
a3,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-484027,Uid:905d42441501c2e6979afd6df9e96a0e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118487150546,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.230:2379,kubernetes.io/config.hash: 905d42441501c2e6979afd6df9e96a0e,kubernetes.io/config.seen: 2023-09-11T12:08:37.925807249Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-484027,Uid:8f43ce84a1b0e0279a12b1137f2ed4cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118483133068,Labels:map[s
tring]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f43ce84a1b0e0279a12b1137f2ed4cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8f43ce84a1b0e0279a12b1137f2ed4cd,kubernetes.io/config.seen: 2023-09-11T12:08:37.925802150Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-484027,Uid:415a16c4d6051dd25329d839e8bc8363,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118479250864,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 415a16c4d6051dd25329d839e8bc8363,kubernetes.io/config.seen: 2023-09-11T12:08:37.925806130Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=59468122-83da-4acb-b402-3a4c99be8026 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.289192778Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6df6a552-17d7-4d5b-88ec-c6696949df4d name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.289271939Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6df6a552-17d7-4d5b-88ec-c6696949df4d name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.289488661Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6df6a552-17d7-4d5b-88ec-c6696949df4d name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.303712445Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=994b07b2-29cd-4cb7-a25f-89c9cdc2fb3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.303822523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=994b07b2-29cd-4cb7-a25f-89c9cdc2fb3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.304232482Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=994b07b2-29cd-4cb7-a25f-89c9cdc2fb3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.345095727Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=080cb2c2-5705-4856-9ccd-e22134ab05f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.345263884Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=080cb2c2-5705-4856-9ccd-e22134ab05f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.345712294Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=080cb2c2-5705-4856-9ccd-e22134ab05f4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.390836034Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=a8c29c61-ab4c-489b-92e6-1ec28ba4f5c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.391023469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=a8c29c61-ab4c-489b-92e6-1ec28ba4f5c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.391292288Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=a8c29c61-ab4c-489b-92e6-1ec28ba4f5c4 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.429990631Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1bbeb4fa-7839-4b19-95b0-3b96f9a5bddf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.430063956Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1bbeb4fa-7839-4b19-95b0-3b96f9a5bddf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.430287749Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1bbeb4fa-7839-4b19-95b0-3b96f9a5bddf name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.466444047Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0c1a9e2c-9f0f-44ff-b1ed-ebb1dfe63820 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.466537772Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0c1a9e2c-9f0f-44ff-b1ed-ebb1dfe63820 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.466813241Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0c1a9e2c-9f0f-44ff-b1ed-ebb1dfe63820 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.511773088Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b8052f00-c9e8-43d6-92ca-a60a276996eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.511863873Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b8052f00-c9e8-43d6-92ca-a60a276996eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.512165727Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b8052f00-c9e8-43d6-92ca-a60a276996eb name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.544679625Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=b491c84e-1744-4b06-a914-ce4644c59b4a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.544745335Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=b491c84e-1744-4b06-a914-ce4644c59b4a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:22:13 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:22:13.545002503Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=b491c84e-1744-4b06-a914-ce4644c59b4a name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	8cc82bfb8abe6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   951133aad1b41
	f44e8458b48fa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   ef1063b26e24d
	8e75cc646ed39       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago      Running             coredns                   1                   5079d932dd6dd
	08777e80449f2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      13 minutes ago      Running             kube-proxy                1                   2904254cb089b
	f5464e92c81e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   951133aad1b41
	153e729fe2650       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago      Running             etcd                      1                   c686329de6a1f
	fc4e7b5d1258c       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      13 minutes ago      Running             kube-scheduler            1                   9e85532741745
	07023f1836d74       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      13 minutes ago      Running             kube-apiserver            1                   2ef2df8aa1112
	169c262446f69       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      13 minutes ago      Running             kube-controller-manager   1                   020cab58e2701
	
	* 
	* ==> coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36512 - 31348 "HINFO IN 324436852554161395.8800393712480390138. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010621908s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-484027
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-484027
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=default-k8s-diff-port-484027
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T12_02_00_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 12:01:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-484027
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 12:22:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:19:29 +0000   Mon, 11 Sep 2023 12:01:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:19:29 +0000   Mon, 11 Sep 2023 12:01:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:19:29 +0000   Mon, 11 Sep 2023 12:01:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:19:29 +0000   Mon, 11 Sep 2023 12:08:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    default-k8s-diff-port-484027
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 05a3bd74df704341a61f28302ea21153
	  System UUID:                05a3bd74-df70-4341-a61f-28302ea21153
	  Boot ID:                    264a2c7a-c929-45ee-9e5a-5cf8e0d6a579
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 coredns-5dd5756b68-xszs4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     20m
	  kube-system                 etcd-default-k8s-diff-port-484027                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         20m
	  kube-system                 kube-apiserver-default-k8s-diff-port-484027             250m (12%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-484027    200m (10%)    0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-proxy-ldgjr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 kube-scheduler-default-k8s-diff-port-484027             100m (5%)     0 (0%)      0 (0%)           0 (0%)         20m
	  kube-system                 metrics-server-57f55c9bc5-tw6td                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         19m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeHasSufficientPID     20m (x7 over 20m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  20m (x8 over 20m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 20m                kubelet          Starting kubelet.
	  Normal  NodeReady                20m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    20m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     20m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  20m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  20m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           20m                node-controller  Node default-k8s-diff-port-484027 event: Registered Node default-k8s-diff-port-484027 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m                node-controller  Node default-k8s-diff-port-484027 event: Registered Node default-k8s-diff-port-484027 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep11 12:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000004] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.102554] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.992736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.888880] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155704] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.571362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.899325] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.137666] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.178357] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.135622] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.286939] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +18.091757] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[ +20.584498] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] <==
	* {"level":"info","ts":"2023-09-11T12:08:43.782138Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became pre-candidate at term 2"}
	{"level":"info","ts":"2023-09-11T12:08:43.782205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgPreVoteResp from f4acae94ef986412 at term 2"}
	{"level":"info","ts":"2023-09-11T12:08:43.782246Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became candidate at term 3"}
	{"level":"info","ts":"2023-09-11T12:08:43.782277Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 received MsgVoteResp from f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2023-09-11T12:08:43.782314Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"f4acae94ef986412 became leader at term 3"}
	{"level":"info","ts":"2023-09-11T12:08:43.782347Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: f4acae94ef986412 elected leader f4acae94ef986412 at term 3"}
	{"level":"info","ts":"2023-09-11T12:08:43.785475Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T12:08:43.785574Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T12:08:43.785626Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:08:43.786108Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"f4acae94ef986412","local-member-attributes":"{Name:default-k8s-diff-port-484027 ClientURLs:[https://192.168.39.230:2379]}","request-path":"/0/members/f4acae94ef986412/attributes","cluster-id":"b0aea99135fe63d","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T12:08:43.786194Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:08:43.787794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.230:2379"}
	{"level":"info","ts":"2023-09-11T12:08:43.787844Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T12:08:48.190696Z","caller":"traceutil/trace.go:171","msg":"trace[2058819301] transaction","detail":"{read_only:false; response_revision:536; number_of_response:1; }","duration":"105.866189ms","start":"2023-09-11T12:08:48.084816Z","end":"2023-09-11T12:08:48.190683Z","steps":["trace[2058819301] 'process raft request'  (duration: 105.468133ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:08:56.649336Z","caller":"traceutil/trace.go:171","msg":"trace[1530011645] linearizableReadLoop","detail":"{readStateIndex:590; appliedIndex:589; }","duration":"361.470679ms","start":"2023-09-11T12:08:56.287844Z","end":"2023-09-11T12:08:56.649315Z","steps":["trace[1530011645] 'read index received'  (duration: 360.216054ms)","trace[1530011645] 'applied index is now lower than readState.Index'  (duration: 1.254078ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T12:08:56.649498Z","caller":"traceutil/trace.go:171","msg":"trace[780628355] transaction","detail":"{read_only:false; response_revision:559; number_of_response:1; }","duration":"641.659572ms","start":"2023-09-11T12:08:56.00783Z","end":"2023-09-11T12:08:56.64949Z","steps":["trace[780628355] 'process raft request'  (duration: 640.331981ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T12:08:56.650303Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T12:08:56.007813Z","time spent":"641.725254ms","remote":"127.0.0.1:55882","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":5522,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/minions/default-k8s-diff-port-484027\" mod_revision:471 > success:<request_put:<key:\"/registry/minions/default-k8s-diff-port-484027\" value_size:5468 >> failure:<request_range:<key:\"/registry/minions/default-k8s-diff-port-484027\" > >"}
	{"level":"warn","ts":"2023-09-11T12:08:56.65102Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"363.181207ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/default-k8s-diff-port-484027\" ","response":"range_response_count:1 size:5536"}
	{"level":"info","ts":"2023-09-11T12:08:56.651107Z","caller":"traceutil/trace.go:171","msg":"trace[780737648] range","detail":"{range_begin:/registry/minions/default-k8s-diff-port-484027; range_end:; response_count:1; response_revision:559; }","duration":"363.274541ms","start":"2023-09-11T12:08:56.287824Z","end":"2023-09-11T12:08:56.651099Z","steps":["trace[780737648] 'agreement among raft nodes before linearized reading'  (duration: 363.093786ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T12:08:56.651153Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T12:08:56.287811Z","time spent":"363.33475ms","remote":"127.0.0.1:55882","response type":"/etcdserverpb.KV/Range","request count":0,"request size":48,"response count":1,"response size":5559,"request content":"key:\"/registry/minions/default-k8s-diff-port-484027\" "}
	{"level":"warn","ts":"2023-09-11T12:08:56.651461Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"168.289021ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-484027\" ","response":"range_response_count:1 size:4346"}
	{"level":"info","ts":"2023-09-11T12:08:56.651613Z","caller":"traceutil/trace.go:171","msg":"trace[63027336] range","detail":"{range_begin:/registry/pods/kube-system/kube-scheduler-default-k8s-diff-port-484027; range_end:; response_count:1; response_revision:559; }","duration":"170.331098ms","start":"2023-09-11T12:08:56.481271Z","end":"2023-09-11T12:08:56.651602Z","steps":["trace[63027336] 'agreement among raft nodes before linearized reading'  (duration: 168.257279ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:18:43.832615Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":816}
	{"level":"info","ts":"2023-09-11T12:18:43.835641Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":816,"took":"2.528318ms","hash":2367774811}
	{"level":"info","ts":"2023-09-11T12:18:43.835685Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2367774811,"revision":816,"compact-revision":-1}
	
	* 
	* ==> kernel <==
	*  12:22:13 up 14 min,  0 users,  load average: 0.18, 0.30, 0.24
	Linux default-k8s-diff-port-484027 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] <==
	* I0911 12:18:46.633854       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:18:46.633917       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:18:46.634142       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:18:46.635315       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:19:45.490647       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.87.53:443: connect: connection refused
	I0911 12:19:45.491074       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:19:46.634757       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:19:46.634997       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:19:46.635037       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:19:46.636117       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:19:46.636255       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:19:46.636285       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:20:45.490194       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.87.53:443: connect: connection refused
	I0911 12:20:45.490244       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0911 12:21:45.490628       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.87.53:443: connect: connection refused
	I0911 12:21:45.491034       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:21:46.635715       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:21:46.635849       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:21:46.635861       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:21:46.637140       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:21:46.637298       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:21:46.637309       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] <==
	* I0911 12:16:28.805747       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:16:58.262488       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:16:58.815817       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:17:28.271765       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:17:28.826542       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:17:58.277740       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:17:58.836795       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:18:28.285344       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:18:28.848144       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:18:58.293703       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:18:58.857595       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:19:28.300869       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:19:28.866693       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:19:58.310184       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:19:58.878350       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:20:09.022734       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="500.134µs"
	I0911 12:20:24.025368       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="197.008µs"
	E0911 12:20:28.316776       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:20:28.887699       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:20:58.327509       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:20:58.902616       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:21:28.334612       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:21:28.912261       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:21:58.340775       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:21:58.922485       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] <==
	* I0911 12:08:47.790276       1 server_others.go:69] "Using iptables proxy"
	I0911 12:08:47.805450       1 node.go:141] Successfully retrieved node IP: 192.168.39.230
	I0911 12:08:47.870766       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 12:08:47.871077       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 12:08:47.874885       1 server_others.go:152] "Using iptables Proxier"
	I0911 12:08:47.875109       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 12:08:47.875510       1 server.go:846] "Version info" version="v1.28.1"
	I0911 12:08:47.875557       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:08:47.877057       1 config.go:188] "Starting service config controller"
	I0911 12:08:47.877121       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 12:08:47.877160       1 config.go:97] "Starting endpoint slice config controller"
	I0911 12:08:47.877198       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 12:08:47.878161       1 config.go:315] "Starting node config controller"
	I0911 12:08:47.878211       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 12:08:47.978314       1 shared_informer.go:318] Caches are synced for node config
	I0911 12:08:47.978374       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 12:08:47.978514       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] <==
	* I0911 12:08:42.292225       1 serving.go:348] Generated self-signed cert in-memory
	W0911 12:08:45.579514       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 12:08:45.579603       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 12:08:45.579618       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 12:08:45.579625       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 12:08:45.640777       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 12:08:45.640825       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:08:45.647480       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 12:08:45.647673       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 12:08:45.647693       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 12:08:45.647713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 12:08:45.748207       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:08:09 UTC, ends at Mon 2023-09-11 12:22:14 UTC. --
	Sep 11 12:19:38 default-k8s-diff-port-484027 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:19:38 default-k8s-diff-port-484027 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:19:42 default-k8s-diff-port-484027 kubelet[923]: E0911 12:19:42.004441     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:19:56 default-k8s-diff-port-484027 kubelet[923]: E0911 12:19:56.021264     923 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 11 12:19:56 default-k8s-diff-port-484027 kubelet[923]: E0911 12:19:56.021417     923 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 11 12:19:56 default-k8s-diff-port-484027 kubelet[923]: E0911 12:19:56.021738     923 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vb979,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-tw6td_kube-system(37d0a828-9243-4359-be39-1c2099835e45): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:19:56 default-k8s-diff-port-484027 kubelet[923]: E0911 12:19:56.021797     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:20:09 default-k8s-diff-port-484027 kubelet[923]: E0911 12:20:09.000398     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:20:24 default-k8s-diff-port-484027 kubelet[923]: E0911 12:20:24.000856     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:20:38 default-k8s-diff-port-484027 kubelet[923]: E0911 12:20:38.018367     923 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:20:38 default-k8s-diff-port-484027 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:20:38 default-k8s-diff-port-484027 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:20:38 default-k8s-diff-port-484027 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:20:39 default-k8s-diff-port-484027 kubelet[923]: E0911 12:20:39.000221     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:20:53 default-k8s-diff-port-484027 kubelet[923]: E0911 12:20:53.000222     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:21:07 default-k8s-diff-port-484027 kubelet[923]: E0911 12:21:07.000132     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:21:19 default-k8s-diff-port-484027 kubelet[923]: E0911 12:21:19.000297     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:21:33 default-k8s-diff-port-484027 kubelet[923]: E0911 12:21:33.000606     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:21:38 default-k8s-diff-port-484027 kubelet[923]: E0911 12:21:38.021483     923 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:21:38 default-k8s-diff-port-484027 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:21:38 default-k8s-diff-port-484027 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:21:38 default-k8s-diff-port-484027 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:21:46 default-k8s-diff-port-484027 kubelet[923]: E0911 12:21:46.000481     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:21:58 default-k8s-diff-port-484027 kubelet[923]: E0911 12:21:58.000563     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:22:09 default-k8s-diff-port-484027 kubelet[923]: E0911 12:22:08.999734     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	
	* 
	* ==> storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] <==
	* I0911 12:09:18.433186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:09:18.456298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:09:18.459726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:09:35.869781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:09:35.871418       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-484027_c66fce9b-b030-4618-8623-40f52941e58b!
	I0911 12:09:35.870351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"314be2a5-1789-42e0-a9e6-b1e42a2502da", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-484027_c66fce9b-b030-4618-8623-40f52941e58b became leader
	I0911 12:09:35.971811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-484027_c66fce9b-b030-4618-8623-40f52941e58b!
	
	* 
	* ==> storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] <==
	* I0911 12:08:47.516453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0911 12:09:17.521878       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tw6td
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 describe pod metrics-server-57f55c9bc5-tw6td
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-484027 describe pod metrics-server-57f55c9bc5-tw6td: exit status 1 (75.903997ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tw6td" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-484027 describe pod metrics-server-57f55c9bc5-tw6td: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (543.34s)
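Note: the post-mortem sequence above can be replayed by hand against the same profile. A minimal sketch using only names recorded in this log; the explicit kube-system namespace is added here because the pod appears under kube-system in the node listing, and the describe above was run without a namespace, which is the likely reason for the NotFound:

	kubectl --context default-k8s-diff-port-484027 get po -A \
	  --field-selector=status.phase!=Running \
	  -o=jsonpath='{.items[*].metadata.name}'
	# describe each non-running pod, passing its namespace explicitly
	kubectl --context default-k8s-diff-port-484027 -n kube-system \
	  describe pod metrics-server-57f55c9bc5-tw6td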

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.29s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0911 12:15:38.106737 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 12:16:22.842839 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-352076 -n no-preload-352076
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:23:43.737773706 +0000 UTC m=+5229.050398622
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
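A roughly equivalent manual check of the condition the test polls for, using only the context, namespace, and label selector recorded above (kubectl wait is an approximation of the test's own readiness polling, not the mechanism it actually uses):

	kubectl --context no-preload-352076 -n kubernetes-dashboard \
	  get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-352076 -n kubernetes-dashboard \
	  wait --for=condition=Ready pod -l k8s-app=kubernetes-dashboard --timeout=9m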
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-352076 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-352076 logs -n 25: (1.626564614s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:57 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559775 ssh                                | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559775 -- sudo                         | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559775                                 | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-352076             | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:59 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-235462            | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642215        | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:01 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-226537 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	|         | disable-driver-mounts-226537                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:02 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-352076                  | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484027  | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-235462                 | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642215             | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484027       | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC | 11 Sep 23 12:13 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:04:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:04:58.034724 2255814 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:04:58.034920 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.034929 2255814 out.go:309] Setting ErrFile to fd 2...
	I0911 12:04:58.034933 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.035102 2255814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:04:58.035709 2255814 out.go:303] Setting JSON to false
	I0911 12:04:58.036651 2255814 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236849,"bootTime":1694197049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:04:58.036727 2255814 start.go:138] virtualization: kvm guest
	I0911 12:04:58.039239 2255814 out.go:177] * [default-k8s-diff-port-484027] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:04:58.041110 2255814 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:04:58.041181 2255814 notify.go:220] Checking for updates...
	I0911 12:04:58.042795 2255814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:04:58.044550 2255814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:04:58.046047 2255814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:04:58.047718 2255814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:04:58.049343 2255814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:04:58.051545 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:04:58.051956 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.052047 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.068212 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0911 12:04:58.068649 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.069289 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.069318 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.069763 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.069987 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.070276 2255814 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:04:58.070629 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.070670 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.085941 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0911 12:04:58.086461 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.086966 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.086995 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.087337 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.087522 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.126206 2255814 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 12:04:58.127558 2255814 start.go:298] selected driver: kvm2
	I0911 12:04:58.127571 2255814 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.127716 2255814 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:04:58.128430 2255814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.128555 2255814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:04:58.144660 2255814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:04:58.145091 2255814 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 12:04:58.145145 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:04:58.145159 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:04:58.145176 2255814 start_flags.go:321] config:
	{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.145377 2255814 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.147634 2255814 out.go:177] * Starting control plane node default-k8s-diff-port-484027 in cluster default-k8s-diff-port-484027
	I0911 12:04:56.741109 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:04:58.149438 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:04:58.149510 2255814 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:04:58.149543 2255814 cache.go:57] Caching tarball of preloaded images
	I0911 12:04:58.149650 2255814 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:04:58.149664 2255814 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:04:58.149825 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:04:58.150070 2255814 start.go:365] acquiring machines lock for default-k8s-diff-port-484027: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:04:59.813165 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:05.893188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:08.965171 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:15.045168 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:18.117188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:24.197148 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:27.269089 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:33.349151 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:36.421191 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:42.501129 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:45.573209 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:51.653159 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:54.725153 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:00.805201 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:03.877105 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:09.957136 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:13.029119 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:19.109157 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:22.181096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:28.261156 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:31.333179 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:37.413187 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:40.485239 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:46.565193 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:49.637182 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:55.717194 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:58.789181 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:04.869137 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:07.941096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:10.946790 2255187 start.go:369] acquired machines lock for "embed-certs-235462" in 4m28.227506413s
	I0911 12:07:10.946859 2255187 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:10.946884 2255187 fix.go:54] fixHost starting: 
	I0911 12:07:10.947279 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:10.947318 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:10.963823 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0911 12:07:10.964352 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:10.965050 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:07:10.965086 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:10.965556 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:10.965804 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:10.965995 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:07:10.967759 2255187 fix.go:102] recreateIfNeeded on embed-certs-235462: state=Stopped err=<nil>
	I0911 12:07:10.967790 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	W0911 12:07:10.968000 2255187 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:10.970103 2255187 out.go:177] * Restarting existing kvm2 VM for "embed-certs-235462" ...
	I0911 12:07:10.971879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Start
	I0911 12:07:10.972130 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring networks are active...
	I0911 12:07:10.973084 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network default is active
	I0911 12:07:10.973424 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network mk-embed-certs-235462 is active
	I0911 12:07:10.973888 2255187 main.go:141] libmachine: (embed-certs-235462) Getting domain xml...
	I0911 12:07:10.974726 2255187 main.go:141] libmachine: (embed-certs-235462) Creating domain...
	I0911 12:07:12.246736 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting to get IP...
	I0911 12:07:12.247648 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.248019 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.248152 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.248016 2256167 retry.go:31] will retry after 245.040457ms: waiting for machine to come up
	I0911 12:07:12.494788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.495311 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.495345 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.495247 2256167 retry.go:31] will retry after 312.634812ms: waiting for machine to come up
	I0911 12:07:10.943345 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:10.943403 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:07:10.946565 2255048 machine.go:91] provisioned docker machine in 4m37.405921901s
	I0911 12:07:10.946641 2255048 fix.go:56] fixHost completed within 4m37.430192233s
	I0911 12:07:10.946648 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 4m37.430236677s
	W0911 12:07:10.946673 2255048 start.go:672] error starting host: provision: host is not running
	W0911 12:07:10.946819 2255048 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0911 12:07:10.946833 2255048 start.go:687] Will try again in 5 seconds ...
	I0911 12:07:12.810038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.810461 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.810496 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.810398 2256167 retry.go:31] will retry after 478.036066ms: waiting for machine to come up
	I0911 12:07:13.290252 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.290701 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.290731 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.290646 2256167 retry.go:31] will retry after 576.124591ms: waiting for machine to come up
	I0911 12:07:13.868555 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.868978 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.869004 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.868931 2256167 retry.go:31] will retry after 487.107859ms: waiting for machine to come up
	I0911 12:07:14.357765 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:14.358240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:14.358315 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:14.358173 2256167 retry.go:31] will retry after 903.857312ms: waiting for machine to come up
	I0911 12:07:15.263350 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:15.263852 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:15.263908 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:15.263777 2256167 retry.go:31] will retry after 830.555039ms: waiting for machine to come up
	I0911 12:07:16.096337 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:16.096743 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:16.096774 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:16.096696 2256167 retry.go:31] will retry after 1.307188723s: waiting for machine to come up
	I0911 12:07:17.406129 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:17.406558 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:17.406584 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:17.406512 2256167 retry.go:31] will retry after 1.681887732s: waiting for machine to come up
	I0911 12:07:15.947503 2255048 start.go:365] acquiring machines lock for no-preload-352076: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:07:19.090590 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:19.091013 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:19.091038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:19.090965 2256167 retry.go:31] will retry after 2.013298988s: waiting for machine to come up
	I0911 12:07:21.105851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:21.106384 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:21.106418 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:21.106319 2256167 retry.go:31] will retry after 2.714578164s: waiting for machine to come up
	I0911 12:07:23.823181 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:23.823687 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:23.823772 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:23.823623 2256167 retry.go:31] will retry after 2.321779277s: waiting for machine to come up
	I0911 12:07:26.147527 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:26.147956 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:26.147986 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:26.147896 2256167 retry.go:31] will retry after 4.307300197s: waiting for machine to come up
	I0911 12:07:31.786165 2255304 start.go:369] acquired machines lock for "old-k8s-version-642215" in 4m38.564304718s
	I0911 12:07:31.786239 2255304 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:31.786261 2255304 fix.go:54] fixHost starting: 
	I0911 12:07:31.786754 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:31.786809 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:31.806853 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0911 12:07:31.807320 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:31.807871 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:07:31.807906 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:31.808284 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:31.808473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:31.808622 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:07:31.810311 2255304 fix.go:102] recreateIfNeeded on old-k8s-version-642215: state=Stopped err=<nil>
	I0911 12:07:31.810345 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	W0911 12:07:31.810524 2255304 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:31.813302 2255304 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642215" ...
	I0911 12:07:30.458075 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.458554 2255187 main.go:141] libmachine: (embed-certs-235462) Found IP for machine: 192.168.50.96
	I0911 12:07:30.458579 2255187 main.go:141] libmachine: (embed-certs-235462) Reserving static IP address...
	I0911 12:07:30.458593 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has current primary IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.459036 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.459066 2255187 main.go:141] libmachine: (embed-certs-235462) Reserved static IP address: 192.168.50.96
	I0911 12:07:30.459088 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | skip adding static IP to network mk-embed-certs-235462 - found existing host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"}
	I0911 12:07:30.459104 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Getting to WaitForSSH function...
	I0911 12:07:30.459117 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting for SSH to be available...
	I0911 12:07:30.461594 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.461938 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.461979 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.462087 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH client type: external
	I0911 12:07:30.462109 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa (-rw-------)
	I0911 12:07:30.462146 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:30.462165 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | About to run SSH command:
	I0911 12:07:30.462200 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | exit 0
	I0911 12:07:30.556976 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:30.557370 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetConfigRaw
	I0911 12:07:30.558054 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.560898 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561254 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.561292 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561638 2255187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/config.json ...
	I0911 12:07:30.561863 2255187 machine.go:88] provisioning docker machine ...
	I0911 12:07:30.561885 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:30.562128 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562296 2255187 buildroot.go:166] provisioning hostname "embed-certs-235462"
	I0911 12:07:30.562315 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562497 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.565095 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565484 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.565519 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565682 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.565852 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566021 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566126 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.566273 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.566796 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.566814 2255187 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-235462 && echo "embed-certs-235462" | sudo tee /etc/hostname
	I0911 12:07:30.706262 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-235462
	
	I0911 12:07:30.706294 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.709499 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.709822 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.709862 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.710067 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.710331 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710598 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710762 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.710986 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.711479 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.711503 2255187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235462/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:30.850084 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:30.850120 2255187 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:30.850141 2255187 buildroot.go:174] setting up certificates
	I0911 12:07:30.850155 2255187 provision.go:83] configureAuth start
	I0911 12:07:30.850168 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.850494 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.853326 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853650 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.853680 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853864 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.856233 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856574 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.856639 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856755 2255187 provision.go:138] copyHostCerts
	I0911 12:07:30.856844 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:30.856859 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:30.856933 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:30.857039 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:30.857050 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:30.857078 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:30.857143 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:30.857150 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:30.857170 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:30.857217 2255187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235462 san=[192.168.50.96 192.168.50.96 localhost 127.0.0.1 minikube embed-certs-235462]
	I0911 12:07:30.996533 2255187 provision.go:172] copyRemoteCerts
	I0911 12:07:30.996607 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:30.996643 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.999950 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.000370 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000514 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.000787 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.000978 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.001133 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.095524 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:31.121456 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:31.145813 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0911 12:07:31.171621 2255187 provision.go:86] duration metric: configureAuth took 321.448095ms
	I0911 12:07:31.171657 2255187 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:31.171880 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:07:31.171975 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.175276 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.175783 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.175819 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.176082 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.176356 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176535 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176724 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.177014 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.177500 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.177521 2255187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:31.514064 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:31.514090 2255187 machine.go:91] provisioned docker machine in 952.213137ms
	I0911 12:07:31.514101 2255187 start.go:300] post-start starting for "embed-certs-235462" (driver="kvm2")
	I0911 12:07:31.514135 2255187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:31.514188 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.514651 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:31.514705 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.517108 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517563 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.517599 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517819 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.518053 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.518243 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.518426 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.612293 2255187 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:31.616991 2255187 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:31.617022 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:31.617143 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:31.617263 2255187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:31.617393 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:31.627725 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:31.652196 2255187 start.go:303] post-start completed in 138.067305ms
	I0911 12:07:31.652232 2255187 fix.go:56] fixHost completed within 20.705348144s
	I0911 12:07:31.652264 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.655234 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655598 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.655633 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655810 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.656000 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656236 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656373 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.656547 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.657061 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.657078 2255187 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:31.785981 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434051.730508911
	
	I0911 12:07:31.786019 2255187 fix.go:206] guest clock: 1694434051.730508911
	I0911 12:07:31.786029 2255187 fix.go:219] Guest: 2023-09-11 12:07:31.730508911 +0000 UTC Remote: 2023-09-11 12:07:31.65223725 +0000 UTC m=+289.079171252 (delta=78.271661ms)
	I0911 12:07:31.786076 2255187 fix.go:190] guest clock delta is within tolerance: 78.271661ms
	I0911 12:07:31.786082 2255187 start.go:83] releasing machines lock for "embed-certs-235462", held for 20.839248295s
	I0911 12:07:31.786115 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.786440 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:31.789427 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.789809 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.789844 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.790024 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790717 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790954 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.791062 2255187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:31.791130 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.791177 2255187 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:31.791208 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.793991 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794359 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794393 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794414 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794669 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.794879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.794871 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794913 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.795104 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.795112 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795289 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.795291 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.795468 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795585 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.910483 2255187 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:31.916739 2255187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:32.059621 2255187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:32.066857 2255187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:32.066955 2255187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:32.084365 2255187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:32.084401 2255187 start.go:466] detecting cgroup driver to use...
	I0911 12:07:32.084518 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:32.098782 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:32.111344 2255187 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:32.111421 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:32.124323 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:32.137910 2255187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:32.244478 2255187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:32.374160 2255187 docker.go:212] disabling docker service ...
	I0911 12:07:32.374262 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:32.387762 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:32.401120 2255187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:32.522150 2255187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
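
(A minimal Go sketch of the step above, not minikube's actual code: the systemctl calls stop, disable, and mask the docker units so they cannot conflict with CRI-O. The runCmd helper is hypothetical and runs the commands locally instead of over SSH.)

package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a hypothetical helper that runs a command and surfaces its output on failure.
func runCmd(args ...string) error {
	out, err := exec.Command(args[0], args[1:]...).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%v: %v: %s", args, err, out)
	}
	return nil
}

// disableDocker mirrors the sequence in the log: stop, disable, and mask the
// docker units so they cannot claim the container runtime.
func disableDocker() {
	// Failures are tolerated: the units may simply not exist on the guest.
	_ = runCmd("sudo", "systemctl", "stop", "-f", "docker.socket")
	_ = runCmd("sudo", "systemctl", "stop", "-f", "docker.service")
	_ = runCmd("sudo", "systemctl", "disable", "docker.socket")
	_ = runCmd("sudo", "systemctl", "mask", "docker.service")
}

func main() {
	disableDocker()
}
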
	I0911 12:07:31.815672 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Start
	I0911 12:07:31.815900 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring networks are active...
	I0911 12:07:31.816771 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network default is active
	I0911 12:07:31.817161 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network mk-old-k8s-version-642215 is active
	I0911 12:07:31.817559 2255304 main.go:141] libmachine: (old-k8s-version-642215) Getting domain xml...
	I0911 12:07:31.818275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Creating domain...
	I0911 12:07:32.639647 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:32.658106 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:32.677573 2255187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:07:32.677658 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.687407 2255187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:32.687499 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.697706 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.707515 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.718090 2255187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:32.728668 2255187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:32.737652 2255187 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:32.737732 2255187 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:32.751510 2255187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:32.760774 2255187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:32.881718 2255187 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:07:33.064736 2255187 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:33.064859 2255187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:33.071112 2255187 start.go:534] Will wait 60s for crictl version
	I0911 12:07:33.071195 2255187 ssh_runner.go:195] Run: which crictl
	I0911 12:07:33.075202 2255187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:33.111795 2255187 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:33.111898 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.162455 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.224538 2255187 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:07:33.226156 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:33.229640 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230164 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:33.230202 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230434 2255187 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:33.235232 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:33.248016 2255187 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:07:33.248094 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:33.290506 2255187 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:07:33.290594 2255187 ssh_runner.go:195] Run: which lz4
	I0911 12:07:33.294802 2255187 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 12:07:33.299115 2255187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:33.299169 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:07:35.241115 2255187 crio.go:444] Took 1.946355 seconds to copy over tarball
	I0911 12:07:35.241211 2255187 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
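
(A rough Go sketch of the preload path logged above, not minikube's actual implementation: if /preloaded.tar.lz4 is missing, copy the cached tarball into place and unpack it into /var with lz4. The guest filesystem is stood in for by the local one, and the cached path in main is hypothetical.)

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// ensurePreload copies the cached image tarball to /preloaded.tar.lz4 if it is
// not already present, then extracts it under /var using lz4, as in the log.
func ensurePreload(cached string) error {
	if _, err := os.Stat("/preloaded.tar.lz4"); err != nil {
		cp := exec.Command("sudo", "cp", cached, "/preloaded.tar.lz4")
		if out, err := cp.CombinedOutput(); err != nil {
			return fmt.Errorf("copy preload: %v: %s", err, out)
		}
	}
	tar := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := tar.CombinedOutput(); err != nil {
		return fmt.Errorf("extract preload: %v: %s", err, out)
	}
	return nil
}

func main() {
	// Hypothetical cache location, for illustration only.
	_ = ensurePreload("/home/jenkins/.minikube/cache/preloaded-tarball/preloaded-images.tar.lz4")
}
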
	I0911 12:07:33.131519 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting to get IP...
	I0911 12:07:33.132551 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.133144 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.133255 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.133123 2256281 retry.go:31] will retry after 206.885556ms: waiting for machine to come up
	I0911 12:07:33.341966 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.342472 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.342495 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.342420 2256281 retry.go:31] will retry after 235.74047ms: waiting for machine to come up
	I0911 12:07:33.580161 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.580683 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.580720 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.580644 2256281 retry.go:31] will retry after 407.752379ms: waiting for machine to come up
	I0911 12:07:33.990505 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.991033 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.991099 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.991019 2256281 retry.go:31] will retry after 579.085044ms: waiting for machine to come up
	I0911 12:07:34.571958 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:34.572419 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:34.572451 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:34.572371 2256281 retry.go:31] will retry after 584.464544ms: waiting for machine to come up
	I0911 12:07:35.158152 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.158644 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.158677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.158579 2256281 retry.go:31] will retry after 750.2868ms: waiting for machine to come up
	I0911 12:07:35.910364 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.910949 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.910983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.910887 2256281 retry.go:31] will retry after 981.989906ms: waiting for machine to come up
	I0911 12:07:36.894691 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:36.895196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:36.895233 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:36.895151 2256281 retry.go:31] will retry after 1.082443232s: waiting for machine to come up
	I0911 12:07:37.979265 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:37.979773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:37.979802 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:37.979699 2256281 retry.go:31] will retry after 1.429811083s: waiting for machine to come up
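
(The repeated "will retry after ..." lines above come from a poll-with-growing-delay loop while the VM acquires a DHCP lease. Below is a minimal Go sketch of that pattern under assumed behavior; lookupIP is a hypothetical stand-in for the libvirt lease query, and the growth factor is illustrative.)

package main

import (
	"errors"
	"fmt"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the timeout passes,
// sleeping for an increasing interval between attempts, roughly like retry.go.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		fmt.Printf("will retry after %v: waiting for machine to come up\n", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the interval between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	attempts := 0
	ip, err := waitForIP(func() (string, error) {
		attempts++
		if attempts < 5 {
			return "", errors.New("no lease yet")
		}
		return "192.168.61.58", nil
	}, time.Minute)
	fmt.Println(ip, err)
}
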
	I0911 12:07:38.272328 2255187 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.031081597s)
	I0911 12:07:38.272378 2255187 crio.go:451] Took 3.031222 seconds to extract the tarball
	I0911 12:07:38.272392 2255187 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:07:38.314797 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:38.363925 2255187 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:07:38.363956 2255187 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:07:38.364034 2255187 ssh_runner.go:195] Run: crio config
	I0911 12:07:38.433884 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:38.433915 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:38.433941 2255187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:07:38.433969 2255187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235462 NodeName:embed-certs-235462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath
:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:07:38.434156 2255187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235462"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:07:38.434250 2255187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-235462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
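
(The kubelet [Service] drop-in above is rendered from a template. The sketch below shows one way to produce such a drop-in with text/template; the template text and field names are illustrative, not minikube's actual template data.)

package main

import (
	"os"
	"text/template"
)

// kubeletUnit is an illustrative drop-in template resembling the one in the log.
const kubeletUnit = `[Unit]
Wants={{.Runtime}}.service

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --container-runtime-endpoint={{.CRISocket}} --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(kubeletUnit))
	_ = t.Execute(os.Stdout, map[string]string{
		"Runtime":           "crio",
		"KubernetesVersion": "v1.28.1",
		"CRISocket":         "unix:///var/run/crio/crio.sock",
		"NodeName":          "embed-certs-235462",
		"NodeIP":            "192.168.50.96",
	})
}
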
	I0911 12:07:38.434339 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:07:38.447171 2255187 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:07:38.447273 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:07:38.459426 2255187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:07:38.478081 2255187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:07:38.495571 2255187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0911 12:07:38.514602 2255187 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I0911 12:07:38.518616 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:38.531178 2255187 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462 for IP: 192.168.50.96
	I0911 12:07:38.531246 2255187 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:07:38.531410 2255187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:07:38.531471 2255187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:07:38.531565 2255187 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/client.key
	I0911 12:07:38.531650 2255187 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key.8e4e34e1
	I0911 12:07:38.531705 2255187 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key
	I0911 12:07:38.531860 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:07:38.531918 2255187 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:07:38.531933 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:07:38.531976 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:07:38.532020 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:07:38.532071 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:07:38.532140 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:38.532870 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:07:38.558426 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0911 12:07:38.582526 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:07:38.606798 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:07:38.630691 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:07:38.655580 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:07:38.682355 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:07:38.707701 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:07:38.732346 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:07:38.757688 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:07:38.783458 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:07:38.808481 2255187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:07:38.825822 2255187 ssh_runner.go:195] Run: openssl version
	I0911 12:07:38.831897 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:07:38.842170 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847385 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847467 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.853456 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:07:38.864049 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:07:38.874236 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879391 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879463 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.885352 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:07:38.895225 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:07:38.905599 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910660 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910748 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.916920 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:07:38.927096 2255187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:07:38.932313 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:07:38.939081 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:07:38.946028 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:07:38.952644 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:07:38.959391 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:07:38.965871 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
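
(The openssl "-checkend 86400" runs above ask whether each certificate expires within 24 hours. A native Go equivalent, shown as a sketch with an assumed certificate path, parses the PEM and compares NotAfter directly.)

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	fmt.Println(soon, err)
}
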
	I0911 12:07:38.972698 2255187 kubeadm.go:404] StartCluster: {Name:embed-certs-235462 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiratio
n:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:07:38.972838 2255187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:07:38.972906 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:39.006683 2255187 cri.go:89] found id: ""
	I0911 12:07:39.006780 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:07:39.017143 2255187 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:07:39.017173 2255187 kubeadm.go:636] restartCluster start
	I0911 12:07:39.017256 2255187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:07:39.029483 2255187 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.031111 2255187 kubeconfig.go:92] found "embed-certs-235462" server: "https://192.168.50.96:8443"
	I0911 12:07:39.034708 2255187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:07:39.046851 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.046919 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.058732 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.058756 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.058816 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.070011 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.570811 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.570945 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.583538 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.071137 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.071264 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.083997 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.570532 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.570646 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.583202 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.070241 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.070369 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.082992 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.570284 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.570420 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.582669 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.070231 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.070341 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.086964 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.570487 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.570592 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.582618 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
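
(The repeated "Checking apiserver status ..." entries above are a roughly 500ms polling loop around pgrep. A minimal Go sketch of that loop follows, run locally rather than over SSH and with an assumed timeout; it is not minikube's actual code.)

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForAPIServer looks for a kube-apiserver process about every 500ms until
// it appears or the deadline passes, using the same pgrep pattern as the log.
func waitForAPIServer(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run(); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver did not appear within %v", timeout)
}

func main() {
	fmt.Println(waitForAPIServer(time.Minute))
}
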
	I0911 12:07:39.411715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:39.412168 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:39.412203 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:39.412129 2256281 retry.go:31] will retry after 2.048771803s: waiting for machine to come up
	I0911 12:07:41.463672 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:41.464124 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:41.464160 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:41.464061 2256281 retry.go:31] will retry after 2.459765131s: waiting for machine to come up
	I0911 12:07:43.071070 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.071249 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.087309 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.570993 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.571105 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.586884 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.070402 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.070525 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.082541 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.571170 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.571303 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.583295 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.070902 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.071002 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.087666 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.570274 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.570400 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.587352 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.070596 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.070729 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.082939 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.570445 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.570559 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.582782 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.070351 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.070485 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.082518 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.571060 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.571155 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.583891 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.926561 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:43.926941 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:43.926983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:43.926918 2256281 retry.go:31] will retry after 2.467825155s: waiting for machine to come up
	I0911 12:07:46.396258 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:46.396703 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:46.396736 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:46.396622 2256281 retry.go:31] will retry after 3.885293775s: waiting for machine to come up
	I0911 12:07:48.070904 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.070994 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.083706 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:48.570268 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.570404 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.582255 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:49.047880 2255187 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:07:49.047929 2255187 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:07:49.047951 2255187 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:07:49.048052 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:49.081907 2255187 cri.go:89] found id: ""
	I0911 12:07:49.082024 2255187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:07:49.099563 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:07:49.109373 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:07:49.109450 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119162 2255187 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119210 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.251091 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.995928 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.192421 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.288496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.365849 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:07:50.365943 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.383262 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.901757 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.401967 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.901613 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:52.402067 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.285991 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:50.286515 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:50.286547 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:50.286433 2256281 retry.go:31] will retry after 3.948880306s: waiting for machine to come up
	I0911 12:07:55.614569 2255814 start.go:369] acquired machines lock for "default-k8s-diff-port-484027" in 2m57.464444695s
	I0911 12:07:55.614642 2255814 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:55.614662 2255814 fix.go:54] fixHost starting: 
	I0911 12:07:55.615164 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:55.615208 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:55.635996 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0911 12:07:55.636556 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:55.637268 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:07:55.637295 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:55.637758 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:55.638000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:07:55.638191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:07:55.640059 2255814 fix.go:102] recreateIfNeeded on default-k8s-diff-port-484027: state=Stopped err=<nil>
	I0911 12:07:55.640086 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	W0911 12:07:55.640254 2255814 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:55.643100 2255814 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-484027" ...
	I0911 12:07:54.236661 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237200 2255304 main.go:141] libmachine: (old-k8s-version-642215) Found IP for machine: 192.168.61.58
	I0911 12:07:54.237226 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserving static IP address...
	I0911 12:07:54.237241 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has current primary IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237676 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.237717 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | skip adding static IP to network mk-old-k8s-version-642215 - found existing host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"}
	I0911 12:07:54.237736 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserved static IP address: 192.168.61.58
	I0911 12:07:54.237756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting for SSH to be available...
	I0911 12:07:54.237773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Getting to WaitForSSH function...
	I0911 12:07:54.240007 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240469 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.240521 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240610 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH client type: external
	I0911 12:07:54.240642 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa (-rw-------)
	I0911 12:07:54.240679 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:54.240700 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | About to run SSH command:
	I0911 12:07:54.240715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | exit 0
	I0911 12:07:54.337416 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:54.337857 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetConfigRaw
	I0911 12:07:54.338666 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.341640 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.341973 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.342025 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.342296 2255304 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/config.json ...
	I0911 12:07:54.342549 2255304 machine.go:88] provisioning docker machine ...
	I0911 12:07:54.342573 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:54.342809 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.342965 2255304 buildroot.go:166] provisioning hostname "old-k8s-version-642215"
	I0911 12:07:54.342986 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.343133 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.345466 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.345848 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.345881 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.346024 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.346214 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346491 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.346713 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.347165 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.347184 2255304 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642215 && echo "old-k8s-version-642215" | sudo tee /etc/hostname
	I0911 12:07:54.487005 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642215
	
	I0911 12:07:54.487058 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.489843 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490146 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.490175 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490378 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.490603 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490774 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490931 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.491146 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.491586 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.491612 2255304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642215/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:54.631441 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:54.631474 2255304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:54.631500 2255304 buildroot.go:174] setting up certificates
	I0911 12:07:54.631513 2255304 provision.go:83] configureAuth start
	I0911 12:07:54.631525 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.631988 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.634992 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635411 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.635448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635700 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.638219 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638608 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.638646 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638788 2255304 provision.go:138] copyHostCerts
	I0911 12:07:54.638870 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:54.638881 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:54.638957 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:54.639087 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:54.639099 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:54.639128 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:54.639278 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:54.639293 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:54.639322 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:54.639405 2255304 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642215 san=[192.168.61.58 192.168.61.58 localhost 127.0.0.1 minikube old-k8s-version-642215]
	I0911 12:07:54.792963 2255304 provision.go:172] copyRemoteCerts
	I0911 12:07:54.793027 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:54.793056 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.796196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796555 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.796592 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796884 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.797124 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.797410 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.797620 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:54.895690 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 12:07:54.923392 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:54.951276 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:54.979345 2255304 provision.go:86] duration metric: configureAuth took 347.814948ms
	I0911 12:07:54.979383 2255304 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:54.979690 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:07:54.979805 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.982955 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983405 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.983448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983618 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.983822 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984020 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984190 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.984377 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.984924 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.984948 2255304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:55.330958 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:55.330995 2255304 machine.go:91] provisioned docker machine in 988.429681ms
	I0911 12:07:55.331008 2255304 start.go:300] post-start starting for "old-k8s-version-642215" (driver="kvm2")
	I0911 12:07:55.331021 2255304 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:55.331049 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.331490 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:55.331536 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.334936 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335425 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.335467 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335645 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.335902 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.336075 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.336290 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.439126 2255304 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:55.445330 2255304 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:55.445370 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:55.445453 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:55.445564 2255304 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:55.445692 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:55.455235 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:55.480979 2255304 start.go:303] post-start completed in 149.950869ms
	I0911 12:07:55.481014 2255304 fix.go:56] fixHost completed within 23.694753941s
	I0911 12:07:55.481046 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.484222 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484612 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.484647 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484879 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.485159 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485352 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485527 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.485696 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:55.486109 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:55.486122 2255304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:55.614312 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434075.554093051
	
	I0911 12:07:55.614344 2255304 fix.go:206] guest clock: 1694434075.554093051
	I0911 12:07:55.614355 2255304 fix.go:219] Guest: 2023-09-11 12:07:55.554093051 +0000 UTC Remote: 2023-09-11 12:07:55.481020512 +0000 UTC m=+302.412352865 (delta=73.072539ms)
	I0911 12:07:55.614409 2255304 fix.go:190] guest clock delta is within tolerance: 73.072539ms
	I0911 12:07:55.614423 2255304 start.go:83] releasing machines lock for "old-k8s-version-642215", held for 23.828210342s
	I0911 12:07:55.614465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.614816 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:55.617993 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618444 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.618489 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619611 2255304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:55.619674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.619732 2255304 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:55.619767 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.622428 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622846 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.622873 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622894 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623012 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623191 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623279 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.623302 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623399 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623543 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.623615 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623747 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623891 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.742462 2255304 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:55.748982 2255304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:55.906639 2255304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:55.914088 2255304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:55.914183 2255304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:55.938200 2255304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:55.938240 2255304 start.go:466] detecting cgroup driver to use...
	I0911 12:07:55.938333 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:55.965549 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:55.986227 2255304 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:55.986308 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:56.003370 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:56.025702 2255304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:56.158835 2255304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:56.311687 2255304 docker.go:212] disabling docker service ...
	I0911 12:07:56.311770 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:56.337492 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:56.355858 2255304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:56.486823 2255304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:56.617414 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:56.634057 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:56.658242 2255304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0911 12:07:56.658370 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.670146 2255304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:56.670252 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.681790 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.695832 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.707434 2255304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:56.718631 2255304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:56.729355 2255304 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:56.729436 2255304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:56.744591 2255304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:56.755374 2255304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:56.906693 2255304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:07:57.131296 2255304 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:57.131439 2255304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:57.137554 2255304 start.go:534] Will wait 60s for crictl version
	I0911 12:07:57.137645 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:07:57.141720 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:57.178003 2255304 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:57.178110 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.236871 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.303639 2255304 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0911 12:07:52.901170 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.401940 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.430776 2255187 api_server.go:72] duration metric: took 3.064926262s to wait for apiserver process to appear ...
	I0911 12:07:53.430809 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:07:53.430837 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431478 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.431528 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431982 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.932765 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.216903 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.216947 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.216964 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.322957 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.322994 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.432419 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.444961 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.445016 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:56.932209 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.942202 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.942242 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:57.432361 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:57.440671 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:07:57.453348 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:07:57.453393 2255187 api_server.go:131] duration metric: took 4.0225758s to wait for apiserver health ...
	I0911 12:07:57.453408 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:57.453418 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:57.455939 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:07:57.457968 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:07:57.488156 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:07:57.524742 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:07:57.543532 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:07:57.543601 2255187 system_pods.go:61] "coredns-5dd5756b68-pkzcf" [4a44c7ec-bb5b-40f0-8d44-d5b77666cb95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:07:57.543616 2255187 system_pods.go:61] "etcd-embed-certs-235462" [c14f9910-0d1d-4494-9ebe-97173ab9abe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:07:57.543671 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4d95f49f-f9ad-40ce-9101-7e67ad978353] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:07:57.543686 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [753eea69-23f4-46f8-b631-36cf0f34d663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:07:57.543701 2255187 system_pods.go:61] "kube-proxy-v24dz" [e527b198-cf8f-4ada-af22-7979b249efd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:07:57.543711 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [b092d336-c45d-4b2c-87a5-df253a5fddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:07:57.543722 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-ldjwn" [4761a51f-8912-4be4-aa1d-2574e10da791] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:07:57.543735 2255187 system_pods.go:61] "storage-provisioner" [810336ff-14a1-4b3d-a4ff-2569f3710bab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:07:57.543744 2255187 system_pods.go:74] duration metric: took 18.975758ms to wait for pod list to return data ...
	I0911 12:07:57.543770 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:07:57.550468 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:07:57.550512 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:07:57.550527 2255187 node_conditions.go:105] duration metric: took 6.741621ms to run NodePressure ...
	I0911 12:07:57.550552 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:55.644857 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Start
	I0911 12:07:55.645094 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring networks are active...
	I0911 12:07:55.646010 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network default is active
	I0911 12:07:55.646393 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network mk-default-k8s-diff-port-484027 is active
	I0911 12:07:55.646808 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Getting domain xml...
	I0911 12:07:55.647513 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Creating domain...
	I0911 12:07:57.083879 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting to get IP...
	I0911 12:07:57.084769 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085290 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085361 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.085279 2256448 retry.go:31] will retry after 226.596764ms: waiting for machine to come up
	I0911 12:07:57.313593 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314083 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.314029 2256448 retry.go:31] will retry after 315.605673ms: waiting for machine to come up
	I0911 12:07:57.631774 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632292 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632329 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.632179 2256448 retry.go:31] will retry after 400.211275ms: waiting for machine to come up
	I0911 12:07:58.034189 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.305610 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:57.309276 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.309677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:57.309721 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.310066 2255304 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:57.316611 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:57.335580 2255304 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0911 12:07:57.335689 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:57.380592 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:07:57.380690 2255304 ssh_runner.go:195] Run: which lz4
	I0911 12:07:57.386023 2255304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:07:57.391807 2255304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:57.391861 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0911 12:07:58.002314 2255187 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010948 2255187 kubeadm.go:787] kubelet initialised
	I0911 12:07:58.010981 2255187 kubeadm.go:788] duration metric: took 8.627903ms waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010993 2255187 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:07:58.020253 2255187 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.027844 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027876 2255187 pod_ready.go:81] duration metric: took 7.583678ms waiting for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.027888 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027900 2255187 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.050283 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050321 2255187 pod_ready.go:81] duration metric: took 22.413628ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.050352 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050369 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.060314 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060348 2255187 pod_ready.go:81] duration metric: took 9.962502ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.060360 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060371 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.069122 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069152 2255187 pod_ready.go:81] duration metric: took 8.771982ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.069164 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069176 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329758 2255187 pod_ready.go:92] pod "kube-proxy-v24dz" in "kube-system" namespace has status "Ready":"True"
	I0911 12:07:59.329789 2255187 pod_ready.go:81] duration metric: took 1.260592229s waiting for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329804 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:01.526483 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
	I0911 12:07:58.034838 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.037141 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.034724 2256448 retry.go:31] will retry after 394.484585ms: waiting for machine to come up
	I0911 12:07:58.431365 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.431982 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.432004 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.431886 2256448 retry.go:31] will retry after 593.506569ms: waiting for machine to come up
	I0911 12:07:59.026841 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027490 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027518 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.027389 2256448 retry.go:31] will retry after 666.166785ms: waiting for machine to come up
	I0911 12:07:59.694652 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695161 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.695113 2256448 retry.go:31] will retry after 975.320046ms: waiting for machine to come up
	I0911 12:08:00.672258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672804 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672851 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:00.672755 2256448 retry.go:31] will retry after 1.161656415s: waiting for machine to come up
	I0911 12:08:01.835653 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836186 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836223 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:01.836130 2256448 retry.go:31] will retry after 1.505608393s: waiting for machine to come up
	I0911 12:07:59.503695 2255304 crio.go:444] Took 2.117718 seconds to copy over tarball
	I0911 12:07:59.503800 2255304 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:02.939001 2255304 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.435164165s)
	I0911 12:08:02.939037 2255304 crio.go:451] Took 3.435307 seconds to extract the tarball
	I0911 12:08:02.939050 2255304 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:02.984446 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:03.037419 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:08:03.037452 2255304 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:03.037546 2255304 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.037582 2255304 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.037597 2255304 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.037628 2255304 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.037583 2255304 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.037607 2255304 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0911 12:08:03.037551 2255304 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.037549 2255304 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.039413 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.039639 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.039819 2255304 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.039854 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.040031 2255304 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.040241 2255304 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0911 12:08:03.815561 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:04.614171 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:04.614199 2255187 pod_ready.go:81] duration metric: took 5.28438743s waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:04.614211 2255187 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:06.638688 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:03.343936 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353931 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353970 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:03.344315 2256448 retry.go:31] will retry after 1.414606279s: waiting for machine to come up
	I0911 12:08:04.761183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761667 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:04.761607 2256448 retry.go:31] will retry after 1.846261641s: waiting for machine to come up
	I0911 12:08:06.609258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609917 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609965 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:06.609851 2256448 retry.go:31] will retry after 2.938814697s: waiting for machine to come up
	I0911 12:08:03.225129 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.227566 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.231565 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.233817 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.239841 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0911 12:08:03.243250 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.247155 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.522779 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.711354 2255304 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0911 12:08:03.711381 2255304 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0911 12:08:03.711438 2255304 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0911 12:08:03.711473 2255304 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.711501 2255304 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0911 12:08:03.711514 2255304 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0911 12:08:03.711530 2255304 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0911 12:08:03.711602 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711641 2255304 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0911 12:08:03.711678 2255304 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.711735 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711536 2255304 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.711823 2255304 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0911 12:08:03.711854 2255304 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.711856 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711894 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711475 2255304 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.711934 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711541 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711474 2255304 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.712005 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.823116 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0911 12:08:03.823136 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.823232 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.823349 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.823374 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.823429 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.823499 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.957383 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0911 12:08:03.957459 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0911 12:08:03.957513 2255304 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.957521 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0911 12:08:03.957564 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0911 12:08:03.957649 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0911 12:08:03.957707 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0911 12:08:03.957743 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0911 12:08:03.962841 2255304 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0911 12:08:03.962863 2255304 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.962905 2255304 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0911 12:08:05.018464 2255304 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.055478429s)
	I0911 12:08:05.018510 2255304 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0911 12:08:05.018571 2255304 cache_images.go:92] LoadImages completed in 1.981102195s
	W0911 12:08:05.018661 2255304 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
	I0911 12:08:05.018747 2255304 ssh_runner.go:195] Run: crio config
	I0911 12:08:05.107550 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:05.107585 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:05.107614 2255304 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:05.107641 2255304 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642215 NodeName:old-k8s-version-642215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 12:08:05.107908 2255304 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-642215
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.58:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
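	The YAML above is the multi-document kubeadm configuration minikube generates for this old-k8s-version profile: an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration, which is later uploaded as /var/tmp/minikube/kubeadm.yaml.new (see the scp line further down). As a purely illustrative check, not something the test run performs, the document kinds in such a file could be listed with:
	    awk '/^kind:/{print $2}' /var/tmp/minikube/kubeadm.yaml.new
	    # InitConfiguration
	    # ClusterConfiguration
	    # KubeletConfiguration
	    # KubeProxyConfiguration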
	
	I0911 12:08:05.108027 2255304 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642215 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:08:05.108106 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0911 12:08:05.120210 2255304 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:05.120311 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:05.129517 2255304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0911 12:08:05.151855 2255304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:05.169543 2255304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0911 12:08:05.190304 2255304 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:05.196014 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:05.211627 2255304 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215 for IP: 192.168.61.58
	I0911 12:08:05.211663 2255304 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:05.211876 2255304 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:05.211943 2255304 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:05.212043 2255304 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.key
	I0911 12:08:05.212130 2255304 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key.7152e027
	I0911 12:08:05.212217 2255304 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key
	I0911 12:08:05.212397 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:05.212451 2255304 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:05.212467 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:05.212500 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:05.212531 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:05.212568 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:05.212637 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:05.213373 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:05.242362 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:05.272949 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:05.299359 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:05.326203 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:05.354388 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:05.385150 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:05.415683 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:05.449119 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:05.476397 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:05.503652 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:05.531520 2255304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:05.550108 2255304 ssh_runner.go:195] Run: openssl version
	I0911 12:08:05.556982 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:05.569083 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574490 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574570 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.581479 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:05.596824 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:05.607900 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613627 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613711 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.620309 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:05.630995 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:05.645786 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652682 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652773 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.660784 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
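	The ln -fs commands above reproduce what OpenSSL's c_rehash does: trust lookups in /etc/ssl/certs find a CA by a symlink named after the certificate's subject hash, which is the value printed by openssl x509 -hash (b5213941 for the minikubeCA above). A minimal sketch of the same idea for an arbitrary CA file, with placeholder paths rather than minikube's exact commands:
	    ca=/usr/share/ca-certificates/minikubeCA.pem
	    hash=$(openssl x509 -hash -noout -in "$ca")    # e.g. b5213941
	    sudo ln -fs "$ca" "/etc/ssl/certs/${hash}.0"   # ".0" suffix; bump it if another CA already uses the same hash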
	I0911 12:08:05.675417 2255304 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:05.681969 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:05.690345 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:05.697454 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:05.706283 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:05.712913 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:05.719308 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
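	Each openssl x509 -checkend 86400 run above exits 0 only if the certificate is still valid 86400 seconds (24 hours) from now, which is presumably how minikube decides whether the existing certs can be reused. A stand-alone illustration of the same check, with a placeholder path:
	    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver.crt -checkend 86400; then
	      echo "certificate is valid for at least another 24h"
	    else
	      echo "certificate expires within 24h (or could not be read); it would need to be regenerated"
	    fi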
	I0911 12:08:05.726307 2255304 kubeadm.go:404] StartCluster: {Name:old-k8s-version-642215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:05.726414 2255304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:05.726478 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:05.765092 2255304 cri.go:89] found id: ""
	I0911 12:08:05.765172 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:05.775654 2255304 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:05.775681 2255304 kubeadm.go:636] restartCluster start
	I0911 12:08:05.775749 2255304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:05.785235 2255304 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.786289 2255304 kubeconfig.go:92] found "old-k8s-version-642215" server: "https://192.168.61.58:8443"
	I0911 12:08:05.789768 2255304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:05.799009 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.799092 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.811208 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.811235 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.811301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.822223 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.322909 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.323053 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.337866 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.823220 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.823328 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.839573 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.323145 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.323245 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.335054 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.822427 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.822536 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.834385 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.146768 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:11.637314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:09.552075 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552494 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:09.552442 2256448 retry.go:31] will retry after 3.623277093s: waiting for machine to come up
	I0911 12:08:08.323215 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.323301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.335501 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:08.822942 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.823061 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.840055 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.322586 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.322692 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.338101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.822702 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.822845 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.835245 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.322666 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.322750 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.337101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.822530 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.822662 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.838511 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.323206 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.323329 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.338239 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.822952 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.823044 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.838752 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.323296 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.323384 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.335174 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.822659 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.822775 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.834762 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.637784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:16.138584 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:13.178553 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179008 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179041 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:13.178961 2256448 retry.go:31] will retry after 3.636806595s: waiting for machine to come up
	I0911 12:08:16.818087 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818548 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has current primary IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Found IP for machine: 192.168.39.230
	I0911 12:08:16.818600 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserving static IP address...
	I0911 12:08:16.819118 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.819156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserved static IP address: 192.168.39.230
	I0911 12:08:16.819182 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | skip adding static IP to network mk-default-k8s-diff-port-484027 - found existing host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"}
	I0911 12:08:16.819204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Getting to WaitForSSH function...
	I0911 12:08:16.819221 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for SSH to be available...
	I0911 12:08:16.821746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822235 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.822270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822454 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH client type: external
	I0911 12:08:16.822500 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa (-rw-------)
	I0911 12:08:16.822551 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:16.822576 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | About to run SSH command:
	I0911 12:08:16.822590 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | exit 0
	I0911 12:08:16.957464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:16.957845 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetConfigRaw
	I0911 12:08:16.958573 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:16.961262 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.961726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.961762 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.962073 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:08:16.962281 2255814 machine.go:88] provisioning docker machine ...
	I0911 12:08:16.962301 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:16.962594 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962777 2255814 buildroot.go:166] provisioning hostname "default-k8s-diff-port-484027"
	I0911 12:08:16.962799 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962971 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:16.965571 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966095 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.966134 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966313 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:16.966531 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966685 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966837 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:16.967021 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:16.967739 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:16.967764 2255814 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-484027 && echo "default-k8s-diff-port-484027" | sudo tee /etc/hostname
	I0911 12:08:17.106967 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-484027
	
	I0911 12:08:17.107036 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.110243 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110663 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.110737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.111197 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111388 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.111782 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.112200 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.112223 2255814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-484027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-484027/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-484027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:17.238410 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:17.238450 2255814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:17.238508 2255814 buildroot.go:174] setting up certificates
	I0911 12:08:17.238520 2255814 provision.go:83] configureAuth start
	I0911 12:08:17.238536 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:17.238938 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:17.241635 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242044 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.242106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242209 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.244737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245093 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.245117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245295 2255814 provision.go:138] copyHostCerts
	I0911 12:08:17.245360 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:17.245375 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:17.245434 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:17.245530 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:17.245537 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:17.245557 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:17.245627 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:17.245634 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:17.245651 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:17.245708 2255814 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-484027 san=[192.168.39.230 192.168.39.230 localhost 127.0.0.1 minikube default-k8s-diff-port-484027]
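	The line above has libmachine sign a docker-machine style server certificate whose SANs cover the VM IP, localhost and the machine name. For orientation only, roughly equivalent OpenSSL commands (file names and subject are illustrative and not minikube's actual code path):
	    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem -out server.csr -subj "/O=jenkins/CN=default-k8s-diff-port-484027"
	    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
	      -extfile <(printf "subjectAltName=IP:192.168.39.230,IP:127.0.0.1,DNS:localhost,DNS:default-k8s-diff-port-484027")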
	I0911 12:08:17.540142 2255814 provision.go:172] copyRemoteCerts
	I0911 12:08:17.540233 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:17.540270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.543823 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544237 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.544277 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544485 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.544706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.544916 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.545060 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:17.645425 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:17.675288 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0911 12:08:17.703043 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:17.732683 2255814 provision.go:86] duration metric: configureAuth took 494.12506ms
	I0911 12:08:17.732713 2255814 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:17.732955 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:17.733076 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.736740 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.737244 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.737707 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.737914 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.738084 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.738324 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.738749 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.738774 2255814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:13.323070 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.323174 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.334828 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.822403 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.822490 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.834374 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.323004 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.323100 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.334774 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.822351 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.822465 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.834368 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:15.323045 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:15.323154 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:15.334863 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
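	Each "Checking apiserver status" attempt above runs pgrep -xnf against the kube-apiserver command line; exit status 1 simply means no matching process exists yet, and minikube keeps retrying until its deadline passes (the "context deadline exceeded" conclusion below). A minimal sketch of such a poll in shell, with an illustrative timeout that is not taken from the log:
	    deadline=$((SECONDS + 120))    # illustrative 2-minute budget
	    until sudo pgrep -xnf 'kube-apiserver.*minikube.*' >/dev/null; do
	      if [ "$SECONDS" -ge "$deadline" ]; then
	        echo "kube-apiserver never appeared; the cluster needs reconfiguring" >&2
	        break
	      fi
	      sleep 0.5
	    done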
	I0911 12:08:15.799700 2255304 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:15.799736 2255304 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:15.799751 2255304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:15.799821 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:15.831051 2255304 cri.go:89] found id: ""
	I0911 12:08:15.831140 2255304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:15.847072 2255304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:15.856353 2255304 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:15.856425 2255304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865711 2255304 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865740 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:15.990047 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.312314 2255304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.322225408s)
	I0911 12:08:17.312354 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.521733 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.627343 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.723857 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:17.723964 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:17.742688 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.336038 2255048 start.go:369] acquired machines lock for "no-preload-352076" in 1m2.388468349s
	I0911 12:08:18.336100 2255048 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:08:18.336125 2255048 fix.go:54] fixHost starting: 
	I0911 12:08:18.336615 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:18.336663 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:18.355715 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0911 12:08:18.356243 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:18.356901 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:08:18.356931 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:18.357385 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:18.357585 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:18.357787 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:08:18.359541 2255048 fix.go:102] recreateIfNeeded on no-preload-352076: state=Stopped err=<nil>
	I0911 12:08:18.359571 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	W0911 12:08:18.359750 2255048 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:08:18.361628 2255048 out.go:177] * Restarting existing kvm2 VM for "no-preload-352076" ...
	I0911 12:08:18.363286 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Start
	I0911 12:08:18.363532 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring networks are active...
	I0911 12:08:18.364515 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network default is active
	I0911 12:08:18.364894 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network mk-no-preload-352076 is active
	I0911 12:08:18.365345 2255048 main.go:141] libmachine: (no-preload-352076) Getting domain xml...
	I0911 12:08:18.366191 2255048 main.go:141] libmachine: (no-preload-352076) Creating domain...
	I0911 12:08:18.078952 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:18.078979 2255814 machine.go:91] provisioned docker machine in 1.116684764s
	I0911 12:08:18.078991 2255814 start.go:300] post-start starting for "default-k8s-diff-port-484027" (driver="kvm2")
	I0911 12:08:18.079011 2255814 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:18.079057 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.079482 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:18.079520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.082212 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082641 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.082674 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.083043 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.083227 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.083403 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.170810 2255814 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:18.175342 2255814 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:18.175370 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:18.175457 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:18.175583 2255814 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:18.175722 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:18.184543 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:18.209487 2255814 start.go:303] post-start completed in 130.475291ms
	I0911 12:08:18.209516 2255814 fix.go:56] fixHost completed within 22.594854569s
	I0911 12:08:18.209540 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.212339 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212779 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.212832 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212967 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.213187 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213366 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213515 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.213680 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:18.214071 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:18.214083 2255814 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:08:18.335862 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434098.277311369
	
	I0911 12:08:18.335893 2255814 fix.go:206] guest clock: 1694434098.277311369
	I0911 12:08:18.335902 2255814 fix.go:219] Guest: 2023-09-11 12:08:18.277311369 +0000 UTC Remote: 2023-09-11 12:08:18.20951981 +0000 UTC m=+200.212950109 (delta=67.791559ms)
	I0911 12:08:18.335925 2255814 fix.go:190] guest clock delta is within tolerance: 67.791559ms
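The clock check above reads the guest's time over SSH with date +%s.%N and compares it against the host clock, only resynchronizing when the delta exceeds a tolerance. Below is a minimal Go sketch of that comparison using the values from this log; the one-second tolerance is an assumption for illustration, not necessarily the threshold minikube applies.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the guest's `date +%s.%N` output (assumed to carry a
// 9-digit nanosecond fraction) and returns the signed guest-minus-host delta.
func guestClockDelta(guestOut string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	return time.Unix(sec, nsec).Sub(host), nil
}

func main() {
	// Values taken from the log lines above (guest vs. remote timestamps).
	delta, err := guestClockDelta("1694434098.277311369", time.Unix(1694434098, 209519810))
	if err != nil {
		panic(err)
	}
	// Hypothetical tolerance: only resync when the clocks differ by more than a second.
	if math.Abs(float64(delta)) > float64(time.Second) {
		fmt.Println("clock delta out of tolerance:", delta)
	} else {
		fmt.Println("guest clock delta is within tolerance:", delta)
	}
}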
	I0911 12:08:18.335932 2255814 start.go:83] releasing machines lock for "default-k8s-diff-port-484027", held for 22.721324127s
	I0911 12:08:18.335977 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.336342 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:18.339935 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340372 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.340411 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340801 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341832 2255814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:18.341895 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.342153 2255814 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:18.342219 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.345331 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345619 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345716 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.345751 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346068 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346282 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.346367 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.346409 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346443 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.346624 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.346803 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346960 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.347119 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.347284 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.455877 2255814 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:18.463787 2255814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:18.620444 2255814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:18.628878 2255814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:18.628972 2255814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:18.652267 2255814 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:18.652301 2255814 start.go:466] detecting cgroup driver to use...
	I0911 12:08:18.652381 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:18.672306 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:18.690514 2255814 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:18.690594 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:18.709032 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:18.727521 2255814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:18.859864 2255814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:19.005708 2255814 docker.go:212] disabling docker service ...
	I0911 12:08:19.005809 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:19.026177 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:19.043931 2255814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:19.184060 2255814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:19.305184 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:19.326550 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:19.351313 2255814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:19.351400 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.366747 2255814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:19.366836 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.382272 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.395743 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.408786 2255814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:19.424229 2255814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:19.438367 2255814 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:19.438450 2255814 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:19.457417 2255814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
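When the sysctl probe fails because br_netfilter is not yet loaded (the status-255 error above), the module is loaded with modprobe and the forwarding knob is written directly under /proc/sys. A rough Go equivalent of those writes follows, as a sketch only: it assumes root privileges and that br_netfilter has already been modprobed, and the exact set of knobs minikube touches may differ.

package main

import (
	"log"
	"os"
)

func main() {
	// These writes mirror `echo 1 > /proc/sys/...`; they require root and assume
	// the br_netfilter module has already been loaded (e.g. via `modprobe br_netfilter`).
	knobs := []string{
		"/proc/sys/net/bridge/bridge-nf-call-iptables",
		"/proc/sys/net/ipv4/ip_forward",
	}
	for _, path := range knobs {
		if err := os.WriteFile(path, []byte("1\n"), 0644); err != nil {
			log.Fatalf("enabling %s: %v", path, err)
		}
	}
}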
	I0911 12:08:19.470001 2255814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:19.629977 2255814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:19.846900 2255814 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:19.846994 2255814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:19.854282 2255814 start.go:534] Will wait 60s for crictl version
	I0911 12:08:19.854378 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:08:19.859252 2255814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:19.897263 2255814 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:19.897349 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:19.966155 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:20.024697 2255814 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:08:18.639188 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.649395 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.026156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:20.029726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030249 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:20.030286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030572 2255814 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:20.035523 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:20.053903 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:20.053997 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:20.096570 2255814 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:20.096666 2255814 ssh_runner.go:195] Run: which lz4
	I0911 12:08:20.102350 2255814 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 12:08:20.107338 2255814 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:08:20.107385 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:08:22.215033 2255814 crio.go:444] Took 2.112735 seconds to copy over tarball
	I0911 12:08:22.215168 2255814 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:18.262191 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.762029 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.262094 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.316271 2255304 api_server.go:72] duration metric: took 1.592409696s to wait for apiserver process to appear ...
	I0911 12:08:19.316309 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:19.316329 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:19.892254 2255048 main.go:141] libmachine: (no-preload-352076) Waiting to get IP...
	I0911 12:08:19.893353 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:19.893857 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:19.893939 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:19.893867 2256639 retry.go:31] will retry after 256.490953ms: waiting for machine to come up
	I0911 12:08:20.152717 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.153686 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.153718 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.153662 2256639 retry.go:31] will retry after 308.528476ms: waiting for machine to come up
	I0911 12:08:20.464569 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.465179 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.465240 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.465150 2256639 retry.go:31] will retry after 329.79495ms: waiting for machine to come up
	I0911 12:08:20.797010 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.797581 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.797615 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.797512 2256639 retry.go:31] will retry after 388.108578ms: waiting for machine to come up
	I0911 12:08:21.187304 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.187980 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.188006 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.187878 2256639 retry.go:31] will retry after 547.488463ms: waiting for machine to come up
	I0911 12:08:21.736835 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.737425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.737466 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.737352 2256639 retry.go:31] will retry after 669.118316ms: waiting for machine to come up
	I0911 12:08:22.407727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:22.408435 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:22.408471 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:22.408353 2256639 retry.go:31] will retry after 986.70059ms: waiting for machine to come up
	I0911 12:08:23.139403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.141299 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:27.493149 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.680145 2255814 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.464917771s)
	I0911 12:08:25.680187 2255814 crio.go:451] Took 3.465097 seconds to extract the tarball
	I0911 12:08:25.680201 2255814 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:25.721940 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:25.770149 2255814 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:08:25.770189 2255814 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:08:25.770296 2255814 ssh_runner.go:195] Run: crio config
	I0911 12:08:25.844108 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:25.844142 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:25.844170 2255814 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:25.844197 2255814 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-484027 NodeName:default-k8s-diff-port-484027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:08:25.844471 2255814 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-484027"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:25.844584 2255814 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-484027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0911 12:08:25.844751 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:08:25.855558 2255814 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:25.855658 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:25.865531 2255814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0911 12:08:25.890631 2255814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:25.914304 2255814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0911 12:08:25.938065 2255814 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:25.943138 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
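The /etc/hosts rewrite above drops any stale line ending in a tab plus the hostname and appends a fresh ip<TAB>hostname mapping, so the entry stays unique across restarts. Below is a sketch of the same idempotent rewrite in Go; the path and the 192.168.39.230 / control-plane.minikube.internal pair are taken from this log, and writing the file in place (instead of the tmp-file-plus-cp dance in the shell command) is a simplification.

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites the hosts file so that exactly one line maps host to ip.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping for this hostname (the shell version greps it out).
		if strings.HasSuffix(line, "\t"+host) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/etc/hosts", "192.168.39.230", "control-plane.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}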
	I0911 12:08:25.963689 2255814 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027 for IP: 192.168.39.230
	I0911 12:08:25.963744 2255814 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:25.963968 2255814 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:25.964026 2255814 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:25.964139 2255814 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.key
	I0911 12:08:25.964245 2255814 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key.165d62e4
	I0911 12:08:25.964309 2255814 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key
	I0911 12:08:25.964546 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:25.964599 2255814 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:25.964618 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:25.964655 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:25.964699 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:25.964731 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:25.964805 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:25.965758 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:26.001391 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:26.032345 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:26.065593 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:26.100792 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:26.135603 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:26.170029 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:26.203119 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:26.232040 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:26.262353 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:26.292733 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:26.326750 2255814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:26.346334 2255814 ssh_runner.go:195] Run: openssl version
	I0911 12:08:26.353175 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:26.365742 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372007 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372086 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.378954 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:26.390365 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:26.403147 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.410930 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.411048 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.419889 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:26.433366 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:26.445752 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452481 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452563 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.461097 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:26.477855 2255814 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:26.483947 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:26.492879 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:26.501391 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:26.510124 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:26.518732 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:26.527356 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
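Each of the openssl x509 -checkend 86400 runs above asks whether a certificate expires within the next 86400 seconds (24 hours); an expiring cert would need to be regenerated before kubeadm is invoked. The same check expressed in Go, as a sketch: the certificate path is one of the files probed above, and the 24-hour window mirrors the -checkend argument.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM-encoded certificate at path expires
// within the given window (the rough equivalent of `openssl x509 -checkend`).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Until(cert.NotAfter) < window, nil
}

func main() {
	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", expiring)
}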
	I0911 12:08:26.536063 2255814 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:26.536225 2255814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:26.536300 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:26.575522 2255814 cri.go:89] found id: ""
	I0911 12:08:26.575617 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:26.586011 2255814 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:26.586043 2255814 kubeadm.go:636] restartCluster start
	I0911 12:08:26.586114 2255814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:26.596758 2255814 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.598534 2255814 kubeconfig.go:92] found "default-k8s-diff-port-484027" server: "https://192.168.39.230:8444"
	I0911 12:08:26.603031 2255814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:26.617921 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.618066 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.632719 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.632739 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.632793 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.650036 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.150299 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.150397 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.165783 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.650311 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.650416 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.665184 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:24.317268 2255304 api_server.go:269] stopped: https://192.168.61.58:8443/healthz: Get "https://192.168.61.58:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0911 12:08:24.317328 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:26.742901 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:26.742942 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:27.243118 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.654196 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.654260 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:27.743438 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.767557 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.767607 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:28.243610 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:28.251858 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:28.262619 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:28.262659 2255304 api_server.go:131] duration metric: took 8.946341912s to wait for apiserver health ...
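The healthz wait above tolerates a 403 (anonymous access is rejected while RBAC bootstrap roles are still being created) and 500s (post-start hooks not yet finished) until the endpoint finally returns 200 with body "ok". A minimal poller in the same spirit, written as a sketch: the endpoint URL is the one from this log, TLS verification is skipped for brevity (minikube itself knows the cluster CA), and the 4-minute budget is an illustrative choice.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Skip cert verification for this quick probe; a real client would trust the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.168.61.58:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("apiserver healthy:", string(body))
				return
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}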
	I0911 12:08:28.262670 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:28.262676 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:28.264705 2255304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:23.396798 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:23.398997 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:23.399029 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:23.397251 2256639 retry.go:31] will retry after 1.384367074s: waiting for machine to come up
	I0911 12:08:24.783036 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:24.783547 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:24.783584 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:24.783489 2256639 retry.go:31] will retry after 1.172643107s: waiting for machine to come up
	I0911 12:08:25.958217 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:25.958989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:25.959024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:25.958929 2256639 retry.go:31] will retry after 2.243377044s: waiting for machine to come up
	I0911 12:08:28.205538 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:28.206196 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:28.206226 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:28.206137 2256639 retry.go:31] will retry after 1.83460511s: waiting for machine to come up
	I0911 12:08:28.266346 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:28.280404 2255304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
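The 457-byte file copied to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration for the 10.244.0.0/16 pod CIDR chosen earlier. The sketch below writes a generic bridge-plus-portmap conflist of that shape; the JSON is illustrative only and not necessarily byte-for-byte what minikube generates.

package main

import (
	"log"
	"os"
)

// A generic bridge+portmap CNI conflist for the 10.244.0.0/16 pod CIDR (illustrative).
const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true}
    }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0644); err != nil {
		log.Fatal(err)
	}
}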
	I0911 12:08:28.308228 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:28.317951 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:28.317994 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:28.318002 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:28.318010 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:28.318024 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Pending
	I0911 12:08:28.318030 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:28.318035 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:28.318039 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:28.318045 2255304 system_pods.go:74] duration metric: took 9.788007ms to wait for pod list to return data ...
	I0911 12:08:28.318055 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:28.323536 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:28.323578 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:28.323593 2255304 node_conditions.go:105] duration metric: took 5.532859ms to run NodePressure ...
	I0911 12:08:28.323619 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:28.927871 2255304 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938224 2255304 kubeadm.go:787] kubelet initialised
	I0911 12:08:28.938256 2255304 kubeadm.go:788] duration metric: took 10.348938ms waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938267 2255304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:28.944405 2255304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.951735 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951774 2255304 pod_ready.go:81] duration metric: took 7.334386ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.951786 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951800 2255304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.964451 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964487 2255304 pod_ready.go:81] duration metric: took 12.678175ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.964499 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964510 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.971472 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971503 2255304 pod_ready.go:81] duration metric: took 6.983445ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.971514 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971523 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.978657 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978691 2255304 pod_ready.go:81] duration metric: took 7.156987ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.978704 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978728 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.334593 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334652 2255304 pod_ready.go:81] duration metric: took 355.905465ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.334670 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334683 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.734221 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734262 2255304 pod_ready.go:81] duration metric: took 399.567918ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.734275 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734287 2255304 pod_ready.go:38] duration metric: took 796.006553ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:29.734313 2255304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:29.749280 2255304 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:29.749313 2255304 kubeadm.go:640] restartCluster took 23.973623788s
	I0911 12:08:29.749325 2255304 kubeadm.go:406] StartCluster complete in 24.023033441s
	I0911 12:08:29.749349 2255304 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.749453 2255304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:29.752216 2255304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.752582 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:29.752784 2255304 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:29.752912 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:08:29.752947 2255304 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-642215"
	I0911 12:08:29.752971 2255304 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-642215"
	I0911 12:08:29.752976 2255304 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753016 2255304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-642215"
	W0911 12:08:29.752979 2255304 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:29.753159 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.752984 2255304 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753232 2255304 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-642215"
	W0911 12:08:29.753281 2255304 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:29.753369 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.753517 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753554 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753599 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753630 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753954 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.754016 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.773524 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:08:29.773614 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0911 12:08:29.774181 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774418 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774950 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.774967 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775141 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0911 12:08:29.775158 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.775176 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775584 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775585 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775597 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.775756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.776112 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776144 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.776178 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.776197 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.776510 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.776970 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776990 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.790443 2255304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-642215" context rescaled to 1 replicas
	I0911 12:08:29.790502 2255304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:29.793918 2255304 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:29.796131 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:29.798116 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0911 12:08:29.798581 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.799554 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.799580 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.800105 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.800439 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.802764 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.805061 2255304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:29.803246 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0911 12:08:29.807001 2255304 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:29.807025 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:29.807053 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.807866 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.807924 2255304 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-642215"
	W0911 12:08:29.807949 2255304 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:29.807985 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.808406 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.808442 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.809644 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.809667 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.817010 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.817046 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.817101 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817131 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.817158 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817555 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.817625 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.817868 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.818244 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.820240 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.822846 2255304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:29.824505 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:29.824526 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:29.824554 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.827924 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828359 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.828396 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828684 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.828950 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.829099 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.829285 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.830900 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0911 12:08:29.831463 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.832028 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.832049 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.832646 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.833261 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.833313 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.868600 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0911 12:08:29.869171 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.869822 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.869842 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.870236 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.870416 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.872928 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.873212 2255304 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:29.873232 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:29.873255 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.876313 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.876963 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.876983 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.876999 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.877168 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.877331 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.877468 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:30.019745 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:30.061364 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:30.061393 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:30.080562 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:30.100494 2255304 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:30.100511 2255304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:30.120618 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:30.120647 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:30.173391 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.173427 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:30.208772 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.757802 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.757841 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.757982 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758021 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758294 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758334 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758344 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758353 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758377 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758620 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758646 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758660 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758677 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758690 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758701 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758717 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758743 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758943 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758954 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.759016 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.759052 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.759062 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859384 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859426 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.859828 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.859853 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859864 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859874 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.860302 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.860336 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.860357 2255304 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-642215"
	I0911 12:08:30.862687 2255304 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:08:29.637791 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:31.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:28.150174 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.150294 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.166331 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:28.650905 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.650996 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.664146 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.150646 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.150745 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.166569 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.651031 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.651129 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.664106 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.150429 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.150535 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.167297 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.650364 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.650458 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.664180 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.150419 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.150521 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.168242 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.650834 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.650922 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.664772 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.150232 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.150362 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.163224 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.650676 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.650773 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.667077 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.864433 2255304 addons.go:502] enable addons completed in 1.111642638s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:08:32.139191 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:30.042388 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:30.043026 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:30.043054 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:30.042967 2256639 retry.go:31] will retry after 3.282840664s: waiting for machine to come up
	I0911 12:08:33.327456 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:33.328007 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:33.328066 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:33.327941 2256639 retry.go:31] will retry after 4.185053881s: waiting for machine to come up
	I0911 12:08:33.639996 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:36.139377 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:33.150668 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.150797 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.163178 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:33.650733 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.650845 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.666475 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.150939 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.151037 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.163985 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.650139 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.650250 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.664850 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.150224 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.150351 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.169894 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.650946 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.651044 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.665438 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.151019 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:36.151134 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:36.164843 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.618412 2255814 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:36.618460 2255814 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:36.618482 2255814 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:36.618571 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:36.657264 2255814 cri.go:89] found id: ""
	I0911 12:08:36.657366 2255814 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:36.676222 2255814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:36.686832 2255814 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:36.686923 2255814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699618 2255814 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699654 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:36.842821 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.471899 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.699214 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.784721 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.870994 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:37.871085 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:37.894561 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:34.638777 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.138575 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.515376 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:37.515955 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:37.515989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:37.515896 2256639 retry.go:31] will retry after 3.472863196s: waiting for machine to come up
	I0911 12:08:38.138433 2255304 node_ready.go:49] node "old-k8s-version-642215" has status "Ready":"True"
	I0911 12:08:38.138464 2255304 node_ready.go:38] duration metric: took 8.037923512s waiting for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:38.138475 2255304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:38.143177 2255304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664411 2255304 pod_ready.go:92] pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.664449 2255304 pod_ready.go:81] duration metric: took 521.244524ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664463 2255304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670838 2255304 pod_ready.go:92] pod "etcd-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.670876 2255304 pod_ready.go:81] duration metric: took 6.404356ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670890 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679254 2255304 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.679284 2255304 pod_ready.go:81] duration metric: took 8.385069ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679299 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939484 2255304 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.939514 2255304 pod_ready.go:81] duration metric: took 260.206232ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939529 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337858 2255304 pod_ready.go:92] pod "kube-proxy-855lt" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.337894 2255304 pod_ready.go:81] duration metric: took 398.358394ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337907 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738437 2255304 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.738465 2255304 pod_ready.go:81] duration metric: took 400.549428ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738479 2255304 pod_ready.go:38] duration metric: took 1.599991385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:39.738509 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:39.738569 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.760727 2255304 api_server.go:72] duration metric: took 9.970181642s to wait for apiserver process to appear ...
	I0911 12:08:39.760774 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:39.760797 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:39.768195 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:39.769416 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:39.769442 2255304 api_server.go:131] duration metric: took 8.658497ms to wait for apiserver health ...
	I0911 12:08:39.769457 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:39.940647 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:39.940683 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:39.940701 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:39.940708 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:39.940715 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:39.940722 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:39.940729 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:39.940736 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:39.940747 2255304 system_pods.go:74] duration metric: took 171.283587ms to wait for pod list to return data ...
	I0911 12:08:39.940763 2255304 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:08:40.139718 2255304 default_sa.go:45] found service account: "default"
	I0911 12:08:40.139751 2255304 default_sa.go:55] duration metric: took 198.981243ms for default service account to be created ...
	I0911 12:08:40.139763 2255304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:08:40.340959 2255304 system_pods.go:86] 7 kube-system pods found
	I0911 12:08:40.340998 2255304 system_pods.go:89] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:40.341008 2255304 system_pods.go:89] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:40.341015 2255304 system_pods.go:89] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:40.341028 2255304 system_pods.go:89] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:40.341035 2255304 system_pods.go:89] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:40.341042 2255304 system_pods.go:89] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:40.341051 2255304 system_pods.go:89] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:40.341061 2255304 system_pods.go:126] duration metric: took 201.290886ms to wait for k8s-apps to be running ...
	I0911 12:08:40.341073 2255304 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:08:40.341163 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:40.359994 2255304 system_svc.go:56] duration metric: took 18.903474ms WaitForService to wait for kubelet.
	I0911 12:08:40.360036 2255304 kubeadm.go:581] duration metric: took 10.569498287s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:08:40.360063 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:40.538713 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:40.538748 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:40.538762 2255304 node_conditions.go:105] duration metric: took 178.692637ms to run NodePressure ...
	I0911 12:08:40.538778 2255304 start.go:228] waiting for startup goroutines ...
	I0911 12:08:40.538785 2255304 start.go:233] waiting for cluster config update ...
	I0911 12:08:40.538798 2255304 start.go:242] writing updated cluster config ...
	I0911 12:08:40.539175 2255304 ssh_runner.go:195] Run: rm -f paused
	I0911 12:08:40.601745 2255304 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0911 12:08:40.604230 2255304 out.go:177] 
	W0911 12:08:40.606184 2255304 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0911 12:08:40.607933 2255304 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0911 12:08:40.609870 2255304 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-642215" cluster and "default" namespace by default
	I0911 12:08:38.638441 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:40.639280 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:38.411419 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:38.910721 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.410710 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.911432 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.411115 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.438764 2255814 api_server.go:72] duration metric: took 2.567766062s to wait for apiserver process to appear ...
	I0911 12:08:40.438803 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:40.438828 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.439582 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.439644 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.440098 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.940202 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.989968 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990485 2255048 main.go:141] libmachine: (no-preload-352076) Found IP for machine: 192.168.72.157
	I0911 12:08:40.990519 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has current primary IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990530 2255048 main.go:141] libmachine: (no-preload-352076) Reserving static IP address...
	I0911 12:08:40.990910 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.990942 2255048 main.go:141] libmachine: (no-preload-352076) Reserved static IP address: 192.168.72.157
	I0911 12:08:40.991004 2255048 main.go:141] libmachine: (no-preload-352076) Waiting for SSH to be available...
	I0911 12:08:40.991024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | skip adding static IP to network mk-no-preload-352076 - found existing host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"}
	I0911 12:08:40.991044 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Getting to WaitForSSH function...
	I0911 12:08:40.994061 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994417 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.994478 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994612 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH client type: external
	I0911 12:08:40.994653 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa (-rw-------)
	I0911 12:08:40.994693 2255048 main.go:141] libmachine: (no-preload-352076) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:40.994711 2255048 main.go:141] libmachine: (no-preload-352076) DBG | About to run SSH command:
	I0911 12:08:40.994725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | exit 0
	I0911 12:08:41.093865 2255048 main.go:141] libmachine: (no-preload-352076) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:41.094360 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetConfigRaw
	I0911 12:08:41.095142 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.098534 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.098944 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.098985 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.099319 2255048 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/config.json ...
	I0911 12:08:41.099667 2255048 machine.go:88] provisioning docker machine ...
	I0911 12:08:41.099711 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:41.100079 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100503 2255048 buildroot.go:166] provisioning hostname "no-preload-352076"
	I0911 12:08:41.100535 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100868 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.104253 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.104920 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.105102 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.105420 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.105864 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106201 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106627 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.106937 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.107432 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.107447 2255048 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-352076 && echo "no-preload-352076" | sudo tee /etc/hostname
	I0911 12:08:41.249885 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-352076
	
	I0911 12:08:41.249919 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.253419 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.253854 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.253892 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.254125 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.254373 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254576 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254752 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.254945 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.255592 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.255624 2255048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-352076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-352076/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-352076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:41.394308 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:41.394348 2255048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:41.394378 2255048 buildroot.go:174] setting up certificates
	I0911 12:08:41.394388 2255048 provision.go:83] configureAuth start
	I0911 12:08:41.394401 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.394737 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.398042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398506 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.398540 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398747 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.401368 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401743 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.401797 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401939 2255048 provision.go:138] copyHostCerts
	I0911 12:08:41.402020 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:41.402034 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:41.402102 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:41.402226 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:41.402238 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:41.402278 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:41.402374 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:41.402386 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:41.402413 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:41.402501 2255048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.no-preload-352076 san=[192.168.72.157 192.168.72.157 localhost 127.0.0.1 minikube no-preload-352076]
	I0911 12:08:41.717751 2255048 provision.go:172] copyRemoteCerts
	I0911 12:08:41.717828 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:41.717865 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.721152 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721457 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.721499 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721720 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.721943 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.722111 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.722261 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:41.818932 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:41.846852 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:41.875977 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 12:08:41.905364 2255048 provision.go:86] duration metric: configureAuth took 510.946609ms
	I0911 12:08:41.905401 2255048 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:41.905662 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:41.905762 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.909182 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909656 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.909725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909913 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.910149 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910342 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910487 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.910649 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.911134 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.911154 2255048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:42.260214 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:42.260254 2255048 machine.go:91] provisioned docker machine in 1.16057097s
	I0911 12:08:42.260268 2255048 start.go:300] post-start starting for "no-preload-352076" (driver="kvm2")
	I0911 12:08:42.260283 2255048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:42.260307 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.260700 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:42.260738 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.263782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264157 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.264197 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264358 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.264595 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.264808 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.265010 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.356470 2255048 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:42.361886 2255048 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:42.361921 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:42.362004 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:42.362082 2255048 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:42.362196 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:42.372005 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:42.400800 2255048 start.go:303] post-start completed in 140.51468ms
	I0911 12:08:42.400850 2255048 fix.go:56] fixHost completed within 24.064734762s
	I0911 12:08:42.400882 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.404351 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.404799 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.404862 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.405055 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.405297 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405484 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405644 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.405859 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:42.406477 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:42.406505 2255048 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:08:42.529978 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434122.467205529
	
	I0911 12:08:42.530008 2255048 fix.go:206] guest clock: 1694434122.467205529
	I0911 12:08:42.530020 2255048 fix.go:219] Guest: 2023-09-11 12:08:42.467205529 +0000 UTC Remote: 2023-09-11 12:08:42.400855668 +0000 UTC m=+369.043734805 (delta=66.349861ms)
	I0911 12:08:42.530049 2255048 fix.go:190] guest clock delta is within tolerance: 66.349861ms
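The clock fix above compares the guest's `date +%s.%N` output against the host clock and leaves the guest clock alone when the skew is small. A minimal sketch of that comparison, with invented helper names and an assumed 1s tolerance:

package provision

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the absolute
// offset from the local clock. Parsing as float64 drops sub-microsecond
// precision, which is fine for a skew check like the one in the log.
func clockDelta(guestOut string, local time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, fmt.Errorf("parsing guest clock %q: %w", guestOut, err)
	}
	guest := time.Unix(0, int64(secs*float64(time.Second)))
	delta := local.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

// withinTolerance reports whether the skew is small enough to skip resetting
// the guest clock; the 1s threshold is an assumption, not minikube's value.
func withinTolerance(delta time.Duration) bool { return delta < time.Second }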
	I0911 12:08:42.530062 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 24.19398788s
	I0911 12:08:42.530094 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.530397 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:42.533425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.533777 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.533809 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.534032 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534670 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534881 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534986 2255048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:42.535048 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.535193 2255048 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:42.535235 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.538009 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538210 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538356 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538386 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538551 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538630 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538658 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538748 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.538862 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538939 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539033 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.539212 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539226 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.539373 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.659315 2255048 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:42.666117 2255048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:42.827592 2255048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:42.834283 2255048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:42.834379 2255048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:42.855077 2255048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:42.855107 2255048 start.go:466] detecting cgroup driver to use...
	I0911 12:08:42.855187 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:42.871553 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:42.886253 2255048 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:42.886341 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:42.902211 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:42.917991 2255048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:43.043679 2255048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:43.182633 2255048 docker.go:212] disabling docker service ...
	I0911 12:08:43.182709 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:43.200269 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:43.216232 2255048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:43.338376 2255048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:43.460730 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:43.478083 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:43.499948 2255048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:43.500018 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.513007 2255048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:43.513098 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.526435 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.539976 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.553967 2255048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:43.568765 2255048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:43.580392 2255048 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:43.580481 2255048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:43.599784 2255048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:08:43.612160 2255048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:43.725608 2255048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:43.930261 2255048 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:43.930390 2255048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:43.937749 2255048 start.go:534] Will wait 60s for crictl version
	I0911 12:08:43.937875 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:43.942818 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:43.986093 2255048 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:43.986210 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.042887 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.106673 2255048 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
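The 60s waits a few steps above for /var/run/crio/crio.sock and for crictl are simple polls for the CRI socket to appear after CRI-O restarts. A minimal local sketch of that poll (an illustration only; the real check runs stat on the guest over SSH):

package provision

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the CRI socket exists or the deadline passes,
// roughly what the "Will wait 60s for socket path" step above does.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("socket %s did not appear within %s", path, timeout)
}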
	I0911 12:08:45.592797 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.592855 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.592874 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.637810 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.637846 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.940997 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.947826 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:45.947867 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.440462 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.449732 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:46.449772 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.940777 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.946988 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:08:46.957787 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:08:46.957832 2255814 api_server.go:131] duration metric: took 6.519019358s to wait for apiserver health ...
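The healthz sequence above is an ordinary poll of the apiserver's /healthz endpoint: 403 while anonymous access is still blocked, 500 while the rbac/bootstrap-roles and scheduling poststarthooks are failing, then 200 once bootstrap completes. A minimal sketch of such a poll; skipping TLS verification is an assumption made because the probe runs without the cluster CA configured, as in the log:

package provision

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver's /healthz until it answers 200 "ok",
// tolerating the 403s (anonymous access before RBAC bootstrap) and 500s
// (poststarthooks still failing) seen in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			io.Copy(io.Discard, resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy within %s", url, timeout)
}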
	I0911 12:08:46.957845 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:46.957854 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:46.960358 2255814 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:43.138628 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:45.640990 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:46.962120 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:46.987804 2255814 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:47.021845 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:47.042508 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:08:47.042560 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:08:47.042575 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:08:47.042585 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:08:47.042600 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:08:47.042612 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:08:47.042624 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:08:47.042641 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:08:47.042652 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:08:47.042663 2255814 system_pods.go:74] duration metric: took 20.787272ms to wait for pod list to return data ...
	I0911 12:08:47.042677 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:47.048412 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:47.048524 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:47.048547 2255814 node_conditions.go:105] duration metric: took 5.861231ms to run NodePressure ...
	I0911 12:08:47.048595 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:47.550933 2255814 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556511 2255814 kubeadm.go:787] kubelet initialised
	I0911 12:08:47.556543 2255814 kubeadm.go:788] duration metric: took 5.579487ms waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556554 2255814 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:47.563694 2255814 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.569943 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.569975 2255814 pod_ready.go:81] duration metric: took 6.244443ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.569986 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.570001 2255814 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.576703 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576777 2255814 pod_ready.go:81] duration metric: took 6.7656ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.576791 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576805 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.587740 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587788 2255814 pod_ready.go:81] duration metric: took 10.95451ms waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.587813 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587833 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.596430 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596468 2255814 pod_ready.go:81] duration metric: took 8.617854ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.596481 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596492 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.956009 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956047 2255814 pod_ready.go:81] duration metric: took 359.546333ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.956060 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956078 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:44.108577 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:44.112208 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.112736 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:44.112782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.113074 2255048 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:44.119517 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:44.140305 2255048 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:44.140398 2255048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:44.184487 2255048 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:44.184529 2255048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:44.184600 2255048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.184910 2255048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.185117 2255048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.185240 2255048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.185366 2255048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.185790 2255048 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.185987 2255048 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0911 12:08:44.186471 2255048 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.186856 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.186943 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.187105 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.187306 2255048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.187523 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.187570 2255048 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0911 12:08:44.188031 2255048 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.188698 2255048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.350727 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.351429 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.353625 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.356576 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.374129 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0911 12:08:44.385524 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.410764 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.472311 2255048 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0911 12:08:44.472382 2255048 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.472453 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.572121 2255048 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0911 12:08:44.572186 2255048 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.572258 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589426 2255048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0911 12:08:44.589558 2255048 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.589492 2255048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0911 12:08:44.589638 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589643 2255048 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.589692 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691568 2255048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0911 12:08:44.691627 2255048 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.691657 2255048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0911 12:08:44.691734 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.691767 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.691749 2255048 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.691816 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691705 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691943 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.691955 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.729362 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.778025 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0911 12:08:44.778152 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0911 12:08:44.778215 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:44.778280 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.799788 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0911 12:08:44.799952 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:08:44.799997 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.800112 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.800183 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0911 12:08:44.800283 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:44.851138 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0911 12:08:44.851174 2255048 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851192 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0911 12:08:44.851227 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0911 12:08:44.851239 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851141 2255048 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0911 12:08:44.851363 2255048 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.851430 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.896214 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0911 12:08:44.896261 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0911 12:08:44.896310 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0911 12:08:44.896376 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:44.896377 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:08:46.231671 2255048 ssh_runner.go:235] Completed: which crictl: (1.380174214s)
	I0911 12:08:46.231732 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (1.33531707s)
	I0911 12:08:46.231734 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.38044194s)
	I0911 12:08:46.231760 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0911 12:08:46.231767 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0911 12:08:46.231780 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:46.231781 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231821 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231777 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.28.1: (1.335378451s)
	I0911 12:08:46.231904 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
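The LoadImages steps above check each required image in the runtime by digest, remove stale tags with crictl, skip re-copying tarballs already present under /var/lib/minikube/images, and load the rest with podman. A rough sketch of the final load step; running the commands locally is an assumption made for illustration, since the real runner executes them on the guest over SSH:

package provision

import (
	"fmt"
	"os"
	"os/exec"
)

// loadCachedImage loads a cached image tarball into the image store via
// podman, after confirming the tarball is present on disk (the "copy:
// skipping ... (exists)" case in the log above).
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("cached image %s not present: %w", tarball, err)
	}
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	if out, err := cmd.CombinedOutput(); err != nil {
		return fmt.Errorf("podman load %s: %v: %s", tarball, err, out)
	}
	return nil
}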
	I0911 12:08:48.356501 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356547 2255814 pod_ready.go:81] duration metric: took 400.453753ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.356563 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356575 2255814 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:48.756718 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756761 2255814 pod_ready.go:81] duration metric: took 400.17438ms waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.756775 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756786 2255814 pod_ready.go:38] duration metric: took 1.200219314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
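The pod_ready waits above poll each system-critical pod until its Ready condition is True, and skip (with the "node ... is currently not Ready" errors) while the node itself has Ready=False. Checking that condition on a fetched Pod object can be sketched with the k8s.io/api types; this is an illustrative helper, not minikube's own:

package provision

import (
	corev1 "k8s.io/api/core/v1"
)

// isPodReady reports whether the Pod's Ready condition is True, which is the
// condition the pod_ready waits above are polling for.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}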
	I0911 12:08:48.756806 2255814 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:48.775561 2255814 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:48.775587 2255814 kubeadm.go:640] restartCluster took 22.189536767s
	I0911 12:08:48.775598 2255814 kubeadm.go:406] StartCluster complete in 22.23955062s
	I0911 12:08:48.775621 2255814 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.775713 2255814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:48.778091 2255814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.778397 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:48.778424 2255814 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:48.778566 2255814 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778597 2255814 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.778614 2255814 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:48.778648 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:48.778696 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.778718 2255814 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778734 2255814 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-484027"
	I0911 12:08:48.779141 2255814 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.779145 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779159 2255814 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.779167 2255814 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:48.779173 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779207 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.779289 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779343 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779509 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779556 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.786929 2255814 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-484027" context rescaled to 1 replicas
	I0911 12:08:48.786996 2255814 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:48.789204 2255814 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:48.790973 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:48.799774 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0911 12:08:48.800366 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.800462 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0911 12:08:48.801065 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.801286 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.801312 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802064 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.802091 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802105 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0911 12:08:48.802166 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802495 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.802842 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.803804 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.803827 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.804437 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.805108 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.805156 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.823113 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0911 12:08:48.823705 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.824347 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.824378 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.824848 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.825073 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.827337 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.827355 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0911 12:08:48.830403 2255814 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:48.827726 2255814 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-484027"
	I0911 12:08:48.828116 2255814 main.go:141] libmachine: () Calling .GetVersion
	W0911 12:08:48.832240 2255814 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:48.832297 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.832351 2255814 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:48.832372 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:48.832404 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.832767 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.832846 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.833819 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.833843 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.834348 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.834583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.836499 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.837953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838586 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.838616 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838662 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.838863 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.839009 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.839383 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.848085 2255814 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:48.850041 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:48.850077 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:48.850117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.853766 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.854324 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.855024 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.855222 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.855427 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.857253 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0911 12:08:48.858013 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.858572 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.858593 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.858922 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.859424 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.859461 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.877066 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0911 12:08:48.877762 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.878430 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.878451 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.878986 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.879214 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.881452 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.881771 2255814 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:48.881790 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:48.881810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.885826 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.886380 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.886406 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.887000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.887261 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.887456 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.887604 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.990643 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:49.087344 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:49.087379 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:49.088448 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:49.172284 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:49.172325 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:49.284010 2255814 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:49.284396 2255814 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:49.296054 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:49.296086 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:49.379706 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:51.018731 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.028036666s)
	I0911 12:08:51.018796 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.018733 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.930229373s)
	I0911 12:08:51.018900 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018920 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019201 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019252 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019291 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019304 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019315 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019325 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019420 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019433 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019445 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019457 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021142 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021184 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021199 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021204 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021238 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.021259 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021542 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021615 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021683 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.122492 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742646501s)
	I0911 12:08:51.122563 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.122582 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123214 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123224 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.123232 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123668 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123713 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123729 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123743 2255814 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-484027"
	I0911 12:08:51.126333 2255814 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:08:48.273682 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:50.640588 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:51.128042 2255814 addons.go:502] enable addons completed in 2.34962006s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:08:51.299348 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:49.857883 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.62602487s)
	I0911 12:08:49.857920 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0911 12:08:49.857935 2255048 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858008 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858007 2255048 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.626200516s)
	I0911 12:08:49.858128 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0911 12:08:49.858215 2255048 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:08:53.140732 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:55.639106 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:53.799851 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:56.661585 2255814 node_ready.go:49] node "default-k8s-diff-port-484027" has status "Ready":"True"
	I0911 12:08:56.661621 2255814 node_ready.go:38] duration metric: took 7.377564832s waiting for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:56.661651 2255814 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:56.675600 2255814 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.686880 2255814 pod_ready.go:92] pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.686977 2255814 pod_ready.go:81] duration metric: took 11.34453ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.687027 2255814 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.695897 2255814 pod_ready.go:92] pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.695991 2255814 pod_ready.go:81] duration metric: took 8.931143ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.696011 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:57.305638 2255048 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5: (7.447392742s)
	I0911 12:08:57.305689 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0911 12:08:57.305809 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.447768556s)
	I0911 12:08:57.305836 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0911 12:08:57.305855 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:57.305932 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:58.142333 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.644281 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:58.721936 2255814 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.721964 2255814 pod_ready.go:81] duration metric: took 2.025944093s waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.721978 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728483 2255814 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.728509 2255814 pod_ready.go:81] duration metric: took 6.525188ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728522 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868777 2255814 pod_ready.go:92] pod "kube-proxy-ldgjr" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.868821 2255814 pod_ready.go:81] duration metric: took 140.280926ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868839 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266668 2255814 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:59.266699 2255814 pod_ready.go:81] duration metric: took 397.852252ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266710 2255814 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:01.578711 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.172738 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.866760661s)
	I0911 12:09:00.172779 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0911 12:09:00.172904 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:00.172989 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:01.745988 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.572965994s)
	I0911 12:09:01.746029 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0911 12:09:01.746047 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:01.746105 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:03.140327 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:05.141268 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:04.080056 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:06.578690 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:03.814358 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.068208039s)
	I0911 12:09:03.814432 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0911 12:09:03.814452 2255048 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:03.814516 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:04.982461 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.167909383s)
	I0911 12:09:04.982505 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0911 12:09:04.982542 2255048 cache_images.go:123] Successfully loaded all cached images
	I0911 12:09:04.982549 2255048 cache_images.go:92] LoadImages completed in 20.798002598s
	I0911 12:09:04.982647 2255048 ssh_runner.go:195] Run: crio config
	I0911 12:09:05.047992 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:05.048024 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:05.048049 2255048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:09:05.048070 2255048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-352076 NodeName:no-preload-352076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:09:05.048268 2255048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-352076"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:09:05.048352 2255048 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-352076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:09:05.048427 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:09:05.060720 2255048 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:09:05.060881 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:09:05.072228 2255048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:09:05.093943 2255048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:09:05.113383 2255048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0911 12:09:05.136859 2255048 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0911 12:09:05.143807 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:09:05.160629 2255048 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076 for IP: 192.168.72.157
	I0911 12:09:05.160686 2255048 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:09:05.161057 2255048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:09:05.161131 2255048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:09:05.161253 2255048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.key
	I0911 12:09:05.161367 2255048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key.66fc92c5
	I0911 12:09:05.161447 2255048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key
	I0911 12:09:05.161605 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:09:05.161646 2255048 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:09:05.161655 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:09:05.161696 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:09:05.161745 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:09:05.161773 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:09:05.161838 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:09:05.162864 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:09:05.196273 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 12:09:05.226310 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:09:05.259094 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:09:05.296329 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:09:05.329537 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:09:05.363893 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:09:05.398183 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:09:05.431986 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:09:05.462584 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:09:05.494047 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:09:05.531243 2255048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:09:05.554858 2255048 ssh_runner.go:195] Run: openssl version
	I0911 12:09:05.564158 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:09:05.578611 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585480 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585563 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.592835 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:09:05.606413 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:09:05.618978 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626101 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626179 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.634526 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:09:05.648394 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:09:05.664598 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671632 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671734 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.679143 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:09:05.691797 2255048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:09:05.698734 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:09:05.706797 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:09:05.713927 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:09:05.721394 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:09:05.728652 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:09:05.736364 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 12:09:05.744505 2255048 kubeadm.go:404] StartCluster: {Name:no-preload-352076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:09:05.744673 2255048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:09:05.744751 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:05.783568 2255048 cri.go:89] found id: ""
	I0911 12:09:05.783665 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:09:05.794403 2255048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:09:05.794443 2255048 kubeadm.go:636] restartCluster start
	I0911 12:09:05.794532 2255048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:09:05.808458 2255048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.809808 2255048 kubeconfig.go:92] found "no-preload-352076" server: "https://192.168.72.157:8443"
	I0911 12:09:05.812541 2255048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:09:05.824406 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.824488 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.838004 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.838029 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.838081 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.850725 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.351553 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.351683 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.365583 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.851068 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.851203 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.865829 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.351654 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.351840 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.365462 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.851109 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.851227 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.865132 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:08.351854 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.351980 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.364980 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.637342 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.637899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.638591 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.078188 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.575790 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:08.850933 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.851079 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.865313 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.350825 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.350918 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.363633 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.850908 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.851009 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.864051 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.351371 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.351459 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.364187 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.851868 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.851993 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.865706 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.351327 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.351445 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.364860 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.851490 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.851579 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.865090 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.351698 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.351841 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.365554 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.851082 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.851189 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.863359 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.351652 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.351762 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.364220 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.638913 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.138385 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:14.075701 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.083424 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:13.851558 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.851650 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.864548 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.351104 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.351196 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.363567 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.851181 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.851287 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.865371 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.351813 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:15.351921 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:15.364728 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.825491 2255048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:09:15.825532 2255048 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:09:15.825549 2255048 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:09:15.825628 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:15.863098 2255048 cri.go:89] found id: ""
	I0911 12:09:15.863207 2255048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:09:15.881673 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:09:15.892264 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:09:15.892363 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903142 2255048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903168 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:16.075542 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.073042 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.305269 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.399770 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.484630 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:09:17.484713 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:17.502746 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.017919 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.139562 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:20.643130 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.578074 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:21.077490 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.517850 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.018007 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.518125 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.018229 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.062967 2255048 api_server.go:72] duration metric: took 2.578334133s to wait for apiserver process to appear ...
	I0911 12:09:20.062999 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:09:20.063024 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.063765 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.063812 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.064348 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.564847 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.276251 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.276297 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.276314 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.320049 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.320081 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.564444 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.570484 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:24.570524 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.064830 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.071229 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:25.071269 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.564901 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.570887 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:09:25.580713 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:09:25.580746 2255048 api_server.go:131] duration metric: took 5.517738896s to wait for apiserver health ...
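The healthz polling above keeps retrying through 403 (anonymous access is rejected until the RBAC bootstrap post-start hook finishes) and 500 (post-start hooks still reporting failed) until the endpoint finally answers 200 "ok". A minimal standalone sketch of that polling pattern in Go is shown below; the endpoint URL, timeout, and the skipped TLS verification are illustrative assumptions, not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout expires.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		Transport: &http.Transport{
			// Test-only assumption: the apiserver serves a self-signed cert, so skip verification.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // endpoint answered "ok"
			}
			// 403 and 500 responses are treated as "not ready yet" and retried, as in the log.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", url)
}

func main() {
	if err := waitForHealthz("https://192.168.72.157:8443/healthz", time.Minute); err != nil {
		fmt.Println(err)
	}
}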
	I0911 12:09:25.580759 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:25.580768 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:25.583425 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:09:23.139085 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.140565 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:23.077522 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.576471 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.585300 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:09:25.610742 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:09:25.660757 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:09:25.680043 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:09:25.680087 2255048 system_pods.go:61] "coredns-5dd5756b68-mghg7" [380c0d4e-d7e3-4434-9757-f4debc5206d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:09:25.680104 2255048 system_pods.go:61] "etcd-no-preload-352076" [4f74cb61-25fb-4478-afd4-3b0f0ef1bdae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:09:25.680115 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [09ed0349-f0dc-4ab0-b057-230daeb8e7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:09:25.680127 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [c93ec6ac-408b-4859-b45b-79bb3e3b53d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:09:25.680142 2255048 system_pods.go:61] "kube-proxy-f748l" [8379d15e-e886-48cb-8a53-3a48aef7c9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:09:25.680157 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [7e7068d1-7f6b-4fe7-b1f4-73ddab4c7db4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:09:25.680174 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-tvrkk" [7b463025-d2f8-4f1d-aa69-740cd828c1d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:09:25.680188 2255048 system_pods.go:61] "storage-provisioner" [52928c2e-1383-41b0-817c-203d016da7df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:09:25.680201 2255048 system_pods.go:74] duration metric: took 19.417405ms to wait for pod list to return data ...
	I0911 12:09:25.680220 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:09:25.685088 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:09:25.685127 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:09:25.685144 2255048 node_conditions.go:105] duration metric: took 4.914847ms to run NodePressure ...
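The NodePressure step above reads the node's ephemeral-storage and CPU capacity from the API. A small client-go sketch of fetching those two values follows; the kubeconfig path is a placeholder assumption and this is not minikube's node_conditions.go implementation.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Placeholder kubeconfig path for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// Capacity is a map of resource name to quantity; these are the values logged above.
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		eph := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), eph.String())
	}
}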
	I0911 12:09:25.685170 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:26.127026 2255048 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137211 2255048 kubeadm.go:787] kubelet initialised
	I0911 12:09:26.137247 2255048 kubeadm.go:788] duration metric: took 10.126758ms waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137258 2255048 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:09:26.144732 2255048 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:28.168555 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:27.637951 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.142107 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.144784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:28.078707 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.575535 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.575917 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.169198 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:31.168599 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:31.168623 2255048 pod_ready.go:81] duration metric: took 5.02386193s waiting for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:31.168633 2255048 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194954 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:32.194986 2255048 pod_ready.go:81] duration metric: took 1.026346965s waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194997 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218527 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:33.218555 2255048 pod_ready.go:81] duration metric: took 1.02355184s waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218568 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:34.637330 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:36.638472 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:34.577030 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:37.076594 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:35.576857 2255048 pod_ready.go:102] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:38.072765 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.072791 2255048 pod_ready.go:81] duration metric: took 4.854217828s waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.072807 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080177 2255048 pod_ready.go:92] pod "kube-proxy-f748l" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.080219 2255048 pod_ready.go:81] duration metric: took 7.386736ms waiting for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080234 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086910 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.086935 2255048 pod_ready.go:81] duration metric: took 6.692353ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086947 2255048 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
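The repeated pod_ready "Ready":"False" lines that follow come from re-reading each pod's Ready condition until it turns True or the 4m0s deadline passes. A rough client-go sketch of that single check is given below; the kubeconfig path and pod name are placeholder assumptions, not minikube's pod_ready.go code.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// podReady reports whether the named pod's Ready condition is currently True.
func podReady(ctx context.Context, cs *kubernetes.Clientset, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Placeholder kubeconfig path and pod name for illustration only.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ready, err := podReady(context.Background(), cs, "kube-system", "metrics-server-57f55c9bc5-tvrkk")
	fmt.Println(ready, err)
}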
	I0911 12:09:39.139899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.638556 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:39.076977 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.077356 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:40.275588 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:42.279343 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.140467 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.638950 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:43.575930 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.075946 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.773655 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.773783 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.639947 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.136953 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.076228 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:50.076280 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:52.575191 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.781871 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.276719 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.137841 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.639201 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:54.575724 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:56.577539 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.774303 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.775398 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:57.776172 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:58.137820 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.140032 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:59.075343 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:01.077352 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.274288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.281024 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.637659 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.638359 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:07.138194 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:03.576039 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:05.581746 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.774609 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:06.777649 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.638158 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:12.138452 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:08.086089 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:10.577034 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.274229 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:11.773772 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:14.637905 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.137141 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.075497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:15.075928 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.077025 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.777087 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:16.273244 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:18.274393 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.138225 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.638206 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.574944 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.577126 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:20.274987 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:22.774026 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:23.638427 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:24.077660 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:26.576065 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.274996 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:27.773877 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.143807 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:30.639138 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.576550 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:31.076503 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:29.775191 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:32.275040 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.137429 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.137961 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:37.141067 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.575704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.576704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:34.773882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:36.774534 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:39.637647 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.639902 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.076297 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:40.577008 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.774671 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.274312 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.274935 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:44.137187 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:46.141314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.079758 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.589530 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.774930 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.273321 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.638868 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:51.139417 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.076212 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.078989 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.575259 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.274454 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.275086 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:53.637980 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:55.638403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.575452 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:56.575714 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.777442 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:57.273658 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:58.136668 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:00.137799 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.077541 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.576462 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.275476 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.773680 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:02.636537 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.637865 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:07.136712 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.078863 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.577886 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:03.776995 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.274574 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:08.275266 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.137886 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.147508 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.075793 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.575828 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:10.275357 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:12.775241 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:13.638603 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.137986 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:14.076435 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.078427 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:15.275325 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:17.275446 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.138511 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.638477 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.575789 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.575987 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.576545 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:19.774865 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.280364 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:23.138801 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:25.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.577693 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:26.581497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.774606 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.274878 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.639126 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.640834 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:32.138497 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.079788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.575364 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.774769 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.777925 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.636906 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.640855 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:33.576041 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:35.577513 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.275601 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.282120 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:39.138445 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:41.638724 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.074500 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.077237 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:42.078135 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.774882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.776485 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.277653 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.639224 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.137265 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:44.574433 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.576378 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:45.776572 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.275210 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.137470 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.580531 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:51.076018 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.775117 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.775535 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.641468 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.138561 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.138875 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:53.078788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.079529 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.577003 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.274582 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.774611 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:59.637786 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:01.644407 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.075246 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.078022 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.274022 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.275711 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.137692 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.614957 2255187 pod_ready.go:81] duration metric: took 4m0.000726123s waiting for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:04.614999 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:04.615020 2255187 pod_ready.go:38] duration metric: took 4m6.604014313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:04.615056 2255187 kubeadm.go:640] restartCluster took 4m25.597873734s
	W0911 12:12:04.615156 2255187 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:12:04.615268 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:12:04.576764 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:06.579533 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.779450 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:07.276202 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:08.580439 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.075465 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:09.277634 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.776920 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:13.076473 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:15.077335 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:17.574470 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:14.276806 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:16.774759 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:19.576080 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:22.078686 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:18.775173 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:21.274723 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:23.276576 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:24.082590 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:26.584485 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:25.277284 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:27.774953 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:29.079400 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:31.575879 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:30.278194 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:32.773872 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.434471 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.819147659s)
	I0911 12:12:37.434634 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:12:37.450370 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:12:37.463019 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:12:37.473313 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:12:37.473375 2255187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:12:33.578208 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.076227 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:34.775135 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.775239 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.703004 2255187 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:12:38.574884 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:40.577027 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:38.779298 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:41.274039 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.076990 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.077566 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:47.576057 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.775208 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.775382 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:48.274401 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:49.022486 2255187 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:12:49.022566 2255187 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:12:49.022667 2255187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:12:49.022825 2255187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:12:49.022994 2255187 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:12:49.023081 2255187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:12:49.025047 2255187 out.go:204]   - Generating certificates and keys ...
	I0911 12:12:49.025151 2255187 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:12:49.025249 2255187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:12:49.025340 2255187 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:12:49.025428 2255187 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:12:49.025521 2255187 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:12:49.025599 2255187 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:12:49.025703 2255187 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:12:49.025801 2255187 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:12:49.025898 2255187 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:12:49.026021 2255187 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:12:49.026083 2255187 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:12:49.026163 2255187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:12:49.026252 2255187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:12:49.026338 2255187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:12:49.026436 2255187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:12:49.026518 2255187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:12:49.026609 2255187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:12:49.026694 2255187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:12:49.028378 2255187 out.go:204]   - Booting up control plane ...
	I0911 12:12:49.028469 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:12:49.028538 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:12:49.028632 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:12:49.028759 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:12:49.028894 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:12:49.028960 2255187 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:12:49.029126 2255187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:12:49.029225 2255187 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504895 seconds
	I0911 12:12:49.029346 2255187 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:12:49.029485 2255187 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:12:49.029568 2255187 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:12:49.029801 2255187 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-235462 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:12:49.029864 2255187 kubeadm.go:322] [bootstrap-token] Using token: u1pjdn.ynd5x30gs2d5ngse
	I0911 12:12:49.031514 2255187 out.go:204]   - Configuring RBAC rules ...
	I0911 12:12:49.031635 2255187 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:12:49.031766 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:12:49.031961 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:12:49.032100 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:12:49.032234 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:12:49.032370 2255187 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:12:49.032513 2255187 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:12:49.032569 2255187 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:12:49.032641 2255187 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:12:49.032653 2255187 kubeadm.go:322] 
	I0911 12:12:49.032721 2255187 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:12:49.032733 2255187 kubeadm.go:322] 
	I0911 12:12:49.032850 2255187 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:12:49.032862 2255187 kubeadm.go:322] 
	I0911 12:12:49.032897 2255187 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:12:49.032954 2255187 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:12:49.033027 2255187 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:12:49.033034 2255187 kubeadm.go:322] 
	I0911 12:12:49.033113 2255187 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:12:49.033125 2255187 kubeadm.go:322] 
	I0911 12:12:49.033185 2255187 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:12:49.033194 2255187 kubeadm.go:322] 
	I0911 12:12:49.033272 2255187 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:12:49.033364 2255187 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:12:49.033478 2255187 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:12:49.033488 2255187 kubeadm.go:322] 
	I0911 12:12:49.033592 2255187 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:12:49.033674 2255187 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:12:49.033681 2255187 kubeadm.go:322] 
	I0911 12:12:49.033793 2255187 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.033940 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:12:49.033981 2255187 kubeadm.go:322] 	--control-plane 
	I0911 12:12:49.033994 2255187 kubeadm.go:322] 
	I0911 12:12:49.034117 2255187 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:12:49.034140 2255187 kubeadm.go:322] 
	I0911 12:12:49.034253 2255187 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.034398 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:12:49.034424 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:12:49.034438 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:12:49.036358 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:12:49.037952 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:12:49.078613 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:12:49.171320 2255187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:12:49.171458 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.171492 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=embed-certs-235462 minikube.k8s.io/updated_at=2023_09_11T12_12_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.227806 2255187 ops.go:34] apiserver oom_adj: -16
	I0911 12:12:49.533909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.637357 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.234909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.734249 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.234928 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.734543 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:52.235022 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.576947 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.075970 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:50.275288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.775973 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.734323 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.234558 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.734598 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.235197 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.734524 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.234539 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.734806 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.234833 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.734868 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:57.235336 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.574674 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:56.577723 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:54.777705 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.274282 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.735164 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.234340 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.734332 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.234884 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.734265 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.234310 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.376532 2255187 kubeadm.go:1081] duration metric: took 11.205145428s to wait for elevateKubeSystemPrivileges.
	I0911 12:13:00.376577 2255187 kubeadm.go:406] StartCluster complete in 5m21.403889838s
	I0911 12:13:00.376632 2255187 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.376754 2255187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:13:00.379195 2255187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.379496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:13:00.379604 2255187 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:13:00.379714 2255187 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-235462"
	I0911 12:13:00.379735 2255187 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-235462"
	W0911 12:13:00.379744 2255187 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:13:00.379770 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:13:00.379813 2255187 addons.go:69] Setting default-storageclass=true in profile "embed-certs-235462"
	I0911 12:13:00.379829 2255187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235462"
	I0911 12:13:00.379872 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380021 2255187 addons.go:69] Setting metrics-server=true in profile "embed-certs-235462"
	I0911 12:13:00.380038 2255187 addons.go:231] Setting addon metrics-server=true in "embed-certs-235462"
	W0911 12:13:00.380053 2255187 addons.go:240] addon metrics-server should already be in state true
	I0911 12:13:00.380092 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380276 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380299 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380314 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380338 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380443 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380464 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.400206 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0911 12:13:00.400222 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0911 12:13:00.400384 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0911 12:13:00.400955 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400990 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400957 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.401597 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401619 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.401749 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401769 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402081 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402237 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.402249 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402314 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402602 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402785 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.402950 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402972 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402986 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.403016 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.424319 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0911 12:13:00.424352 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0911 12:13:00.424991 2255187 addons.go:231] Setting addon default-storageclass=true in "embed-certs-235462"
	W0911 12:13:00.425015 2255187 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:13:00.425039 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425053 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.425387 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425471 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.425496 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.425891 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.425904 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426206 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.426222 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426644 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.426842 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.428151 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.429014 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.431494 2255187 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:13:00.429852 2255187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-235462" context rescaled to 1 replicas
	I0911 12:13:00.430039 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.433081 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:13:00.433096 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:13:00.433121 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.433184 2255187 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:13:00.438048 2255187 out.go:177] * Verifying Kubernetes components...
	I0911 12:13:00.436324 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.437532 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.438207 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.442076 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:00.442211 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.442240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.443931 2255187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:13:00.442451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.445563 2255187 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.445579 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.445583 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:13:00.445606 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.445674 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.449267 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0911 12:13:00.449534 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.449823 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.450240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.450270 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.450451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.450818 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.450838 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.450906 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.451120 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.451298 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.452043 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.452652 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.452686 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.470652 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0911 12:13:00.471240 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.471865 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.471888 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.472326 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.472745 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.474485 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.475072 2255187 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.475093 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:13:00.475123 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.478333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478757 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.478788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478949 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.479157 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.479301 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.479434 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.601913 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:13:00.601946 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:13:00.629483 2255187 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.629938 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:13:00.651067 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.653504 2255187 node_ready.go:49] node "embed-certs-235462" has status "Ready":"True"
	I0911 12:13:00.653549 2255187 node_ready.go:38] duration metric: took 24.023395ms waiting for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.653564 2255187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:00.663033 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:13:00.663075 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:13:00.668515 2255187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.709787 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.751534 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.751565 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:13:00.782859 2255187 pod_ready.go:92] pod "etcd-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.782894 2255187 pod_ready.go:81] duration metric: took 114.332855ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.782910 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.823512 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.891619 2255187 pod_ready.go:92] pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.891678 2255187 pod_ready.go:81] duration metric: took 108.758908ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.891695 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001447 2255187 pod_ready.go:92] pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.001483 2255187 pod_ready.go:81] duration metric: took 109.778603ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001501 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164166 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.164205 2255187 pod_ready.go:81] duration metric: took 162.694687ms waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164216 2255187 pod_ready.go:38] duration metric: took 510.637428ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:01.164239 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:13:01.164300 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:12:59.081781 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:59.267524 2255814 pod_ready.go:81] duration metric: took 4m0.000791617s waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:59.267566 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:59.267580 2255814 pod_ready.go:38] duration metric: took 4m2.605912471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:59.267603 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:12:59.267645 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:12:59.267855 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:12:59.332014 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:12:59.332042 2255814 cri.go:89] found id: ""
	I0911 12:12:59.332053 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:12:59.332135 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.338400 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:12:59.338493 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:12:59.373232 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:12:59.373284 2255814 cri.go:89] found id: ""
	I0911 12:12:59.373296 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:12:59.373371 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.379199 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:12:59.379288 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:12:59.415804 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:12:59.415840 2255814 cri.go:89] found id: ""
	I0911 12:12:59.415852 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:12:59.415940 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.422256 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:12:59.422343 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:12:59.462300 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:12:59.462327 2255814 cri.go:89] found id: ""
	I0911 12:12:59.462336 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:12:59.462392 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.467244 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:12:59.467364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:12:59.499594 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.499619 2255814 cri.go:89] found id: ""
	I0911 12:12:59.499627 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:12:59.499697 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.504481 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:12:59.504570 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:12:59.536588 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.536620 2255814 cri.go:89] found id: ""
	I0911 12:12:59.536631 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:12:59.536701 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.541454 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:12:59.541529 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:12:59.577953 2255814 cri.go:89] found id: ""
	I0911 12:12:59.577990 2255814 logs.go:284] 0 containers: []
	W0911 12:12:59.578001 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:12:59.578010 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:12:59.578084 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:12:59.616256 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.616283 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.616288 2255814 cri.go:89] found id: ""
	I0911 12:12:59.616296 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:12:59.616350 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.621818 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.627431 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:12:59.627462 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:12:59.690633 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:12:59.690681 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.733084 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:12:59.733133 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.775174 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:12:59.775220 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:12:59.829438 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:12:59.829492 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.894842 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:12:59.894888 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.936662 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:12:59.936703 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:12:59.955507 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:12:59.955544 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:00.127082 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:00.127129 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:00.178458 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:00.178501 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:00.226759 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:00.226805 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:00.267586 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:00.267637 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:00.311431 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:00.311465 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:12:59.276905 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:01.775061 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:02.733813 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.103819607s)
	I0911 12:13:02.733859 2255187 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0911 12:13:03.298110 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.646997747s)
	I0911 12:13:03.298169 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298179 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298209 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.588380755s)
	I0911 12:13:03.298256 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298278 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298545 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298566 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298577 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298586 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298596 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298611 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298622 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298834 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.298891 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298904 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299077 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299104 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299117 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.299127 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.299083 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.299459 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299474 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.485702 2255187 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.321356388s)
	I0911 12:13:03.485741 2255187 api_server.go:72] duration metric: took 3.052522714s to wait for apiserver process to appear ...
	I0911 12:13:03.485748 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.485768 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:13:03.485987 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.66240811s)
	I0911 12:13:03.486070 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486090 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486553 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.486621 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486642 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486666 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486683 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486940 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486956 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486968 2255187 addons.go:467] Verifying addon metrics-server=true in "embed-certs-235462"
	I0911 12:13:03.489450 2255187 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:13:03.491514 2255187 addons.go:502] enable addons completed in 3.11190942s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:13:03.571696 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:13:03.576690 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:03.576730 2255187 api_server.go:131] duration metric: took 90.974437ms to wait for apiserver health ...
	I0911 12:13:03.576743 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:03.592687 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:03.592734 2255187 system_pods.go:61] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.592745 2255187 system_pods.go:61] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.592753 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.592761 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.592769 2255187 system_pods.go:61] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.592778 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.592787 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.592802 2255187 system_pods.go:61] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.592839 2255187 system_pods.go:74] duration metric: took 16.087864ms to wait for pod list to return data ...
	I0911 12:13:03.592855 2255187 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:03.606427 2255187 default_sa.go:45] found service account: "default"
	I0911 12:13:03.606517 2255187 default_sa.go:55] duration metric: took 13.6536ms for default service account to be created ...
	I0911 12:13:03.606542 2255187 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:03.622692 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.622752 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.622765 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.622777 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.622786 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.622801 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.622814 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.622980 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.623076 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.623157 2255187 retry.go:31] will retry after 240.25273ms: missing components: kube-dns, kube-proxy
	I0911 12:13:03.874980 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.875031 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.875041 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.875048 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.875081 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.875094 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.875104 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.875118 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.875130 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.875163 2255187 retry.go:31] will retry after 285.300702ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.171503 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.171548 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.171558 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.171566 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.171574 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.171580 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.171587 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.171598 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.171607 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.171632 2255187 retry.go:31] will retry after 386.395514ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.565931 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.565972 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.565982 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.565991 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.565998 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.566007 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.566015 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.566025 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.566039 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.566062 2255187 retry.go:31] will retry after 526.673ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.104101 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.104230 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:05.104257 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.104277 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.104294 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.104312 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:05.104336 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.104353 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.104363 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.104385 2255187 retry.go:31] will retry after 628.795734ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.745358 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.745392 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Running
	I0911 12:13:05.745400 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.745408 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.745416 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.745421 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Running
	I0911 12:13:05.745427 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.745440 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.745451 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.745463 2255187 system_pods.go:126] duration metric: took 2.138903103s to wait for k8s-apps to be running ...
	I0911 12:13:05.745480 2255187 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:05.745540 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:05.762725 2255187 system_svc.go:56] duration metric: took 17.229678ms WaitForService to wait for kubelet.
	I0911 12:13:05.762766 2255187 kubeadm.go:581] duration metric: took 5.329544538s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:05.762793 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:05.767056 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:05.767087 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:05.767112 2255187 node_conditions.go:105] duration metric: took 4.314286ms to run NodePressure ...
	I0911 12:13:05.767131 2255187 start.go:228] waiting for startup goroutines ...
	I0911 12:13:05.767138 2255187 start.go:233] waiting for cluster config update ...
	I0911 12:13:05.767147 2255187 start.go:242] writing updated cluster config ...
	I0911 12:13:05.767462 2255187 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:05.823796 2255187 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:05.826336 2255187 out.go:177] * Done! kubectl is now configured to use "embed-certs-235462" cluster and "default" namespace by default
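The lines above record minikube's system_pods wait loop: it repeatedly lists the kube-system pods and retries with a growing delay until the missing components (kube-dns, kube-proxy) report Running. A minimal sketch of the same poll-and-retry idea is below; it is not minikube's actual implementation (which uses the Kubernetes client over the cluster API), and it assumes kubectl is on PATH and already pointed at the cluster. The starting delay and backoff factor are illustrative.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForPods polls kube-system until no pod reports phase Pending,
// retrying with a growing delay, similar to the system_pods.go loop above.
func waitForPods(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	delay := 300 * time.Millisecond // illustrative starting delay
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "get", "pods", "-n", "kube-system",
			"-o", "jsonpath={range .items[*]}{.status.phase}{\"\\n\"}{end}").Output()
		if err == nil && !strings.Contains(string(out), "Pending") {
			return nil
		}
		fmt.Printf("will retry after %v\n", delay)
		time.Sleep(delay)
		delay += delay / 2 // back off a little on each attempt
	}
	return fmt.Errorf("timed out waiting for kube-system pods")
}

func main() {
	if err := waitForPods(2 * time.Minute); err != nil {
		fmt.Println(err)
	}
}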
	I0911 12:13:03.450576 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:13:03.472433 2255814 api_server.go:72] duration metric: took 4m14.685379298s to wait for apiserver process to appear ...
	I0911 12:13:03.472469 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.472520 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:03.472614 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:03.515433 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:03.515471 2255814 cri.go:89] found id: ""
	I0911 12:13:03.515483 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:03.515560 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.521654 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:03.521745 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:03.569379 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:03.569406 2255814 cri.go:89] found id: ""
	I0911 12:13:03.569416 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:03.569481 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.574638 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:03.574723 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:03.610693 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.610722 2255814 cri.go:89] found id: ""
	I0911 12:13:03.610733 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:03.610794 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.615774 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:03.615894 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:03.657087 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:03.657117 2255814 cri.go:89] found id: ""
	I0911 12:13:03.657129 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:03.657211 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.662224 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:03.662315 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:03.698282 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.698359 2255814 cri.go:89] found id: ""
	I0911 12:13:03.698381 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:03.698466 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.704160 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:03.704246 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:03.748122 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.748152 2255814 cri.go:89] found id: ""
	I0911 12:13:03.748162 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:03.748238 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.752657 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:03.752742 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:03.786815 2255814 cri.go:89] found id: ""
	I0911 12:13:03.786853 2255814 logs.go:284] 0 containers: []
	W0911 12:13:03.786863 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:03.786871 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:03.786942 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:03.824384 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.824409 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:03.824414 2255814 cri.go:89] found id: ""
	I0911 12:13:03.824421 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:03.824497 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.830317 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.836320 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:03.836355 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.887480 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:03.887524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.930466 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:03.930507 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.966522 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:03.966563 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:04.026111 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:04.026168 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:04.045422 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:04.045468 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:04.185127 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:04.185179 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:04.235047 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:04.235089 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:04.856084 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:04.856134 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:04.903388 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:04.903433 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:04.964861 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:04.964916 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:05.007565 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:05.007605 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:05.069630 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:05.069676 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
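This log-gathering pass follows a two-step pattern visible above: resolve each component's container ID with `crictl ps -a --quiet --name=<component>`, then pull its last 400 log lines with `crictl logs --tail 400 <id>`. A minimal local sketch of that flow follows; the component names and the 400-line tail mirror the log, while the SSH hop minikube performs via ssh_runner is omitted, so this assumes crictl is available locally under sudo.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherLogs mirrors the two-step pattern above: list container IDs by
// name via crictl, then fetch the last 400 log lines for each ID.
func gatherLogs(component string) {
	ids, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+component).Output()
	if err != nil {
		fmt.Printf("listing %s containers failed: %v\n", component, err)
		return
	}
	for _, id := range strings.Fields(string(ids)) {
		out, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		fmt.Printf("=== %s [%s] ===\n%s\n", component, id, out)
	}
}

func main() {
	for _, c := range []string{
		"kube-apiserver", "etcd", "coredns", "kube-scheduler",
		"kube-proxy", "kube-controller-manager", "storage-provisioner",
	} {
		gatherLogs(c)
	}
}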
	I0911 12:13:07.608676 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:13:07.615388 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:13:07.617076 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:07.617101 2255814 api_server.go:131] duration metric: took 4.14462443s to wait for apiserver health ...
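The healthz probe recorded above hits the apiserver over HTTPS on port 8444 and treats a 200 response with body "ok" as healthy. A minimal sketch of that check is below; the address comes from the log, and skipping TLS verification is an assumption made only to keep the example self-contained against the cluster's self-signed certificate (minikube's own client trusts the cluster CA instead).

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver here serves a self-signed certificate, so this sketch
	// skips verification; a real client should trust the cluster CA.
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://192.168.39.230:8444/healthz")
	if err != nil {
		fmt.Println("healthz not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("returned %d: %s\n", resp.StatusCode, body)
}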
	I0911 12:13:07.617110 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:07.617138 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:07.617196 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:07.656726 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:07.656750 2255814 cri.go:89] found id: ""
	I0911 12:13:07.656760 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:07.656850 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.661277 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:07.661364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:07.697717 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:07.697746 2255814 cri.go:89] found id: ""
	I0911 12:13:07.697754 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:07.697842 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.703800 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:07.703888 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:07.747003 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:07.747033 2255814 cri.go:89] found id: ""
	I0911 12:13:07.747043 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:07.747122 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.751932 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:07.752007 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:07.785348 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:07.785375 2255814 cri.go:89] found id: ""
	I0911 12:13:07.785386 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:07.785460 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.790170 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:07.790237 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:07.827467 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:07.827496 2255814 cri.go:89] found id: ""
	I0911 12:13:07.827510 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:07.827583 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.834478 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:07.834552 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:07.873739 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:07.873766 2255814 cri.go:89] found id: ""
	I0911 12:13:07.873774 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:07.873828 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.878424 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:07.878528 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:07.916665 2255814 cri.go:89] found id: ""
	I0911 12:13:07.916696 2255814 logs.go:284] 0 containers: []
	W0911 12:13:07.916708 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:07.916716 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:07.916780 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:07.950146 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:07.950172 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.950177 2255814 cri.go:89] found id: ""
	I0911 12:13:07.950185 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:07.950256 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.954996 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.959157 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:07.959189 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:08.027081 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:08.027112 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.775843 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:06.274500 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:08.079481 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:08.079522 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:08.118655 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:08.118696 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:08.177644 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:08.177690 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:08.192495 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:08.192524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:08.338344 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:08.338388 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:08.385409 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:08.385454 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:08.420999 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:08.421033 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:08.457183 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:08.457223 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:08.500499 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:08.500531 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:08.550546 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:08.550587 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:08.584802 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:08.584854 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:11.626627 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:11.626661 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.626666 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.626670 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.626675 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.626679 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.626683 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.626690 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.626696 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.626702 2255814 system_pods.go:74] duration metric: took 4.009586477s to wait for pod list to return data ...
	I0911 12:13:11.626710 2255814 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:11.630703 2255814 default_sa.go:45] found service account: "default"
	I0911 12:13:11.630735 2255814 default_sa.go:55] duration metric: took 4.019315ms for default service account to be created ...
	I0911 12:13:11.630747 2255814 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:11.637643 2255814 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:11.637681 2255814 system_pods.go:89] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.637687 2255814 system_pods.go:89] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.637693 2255814 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.637697 2255814 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.637701 2255814 system_pods.go:89] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.637706 2255814 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.637713 2255814 system_pods.go:89] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.637720 2255814 system_pods.go:89] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.637727 2255814 system_pods.go:126] duration metric: took 6.974046ms to wait for k8s-apps to be running ...
	I0911 12:13:11.637734 2255814 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:11.637781 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:11.656267 2255814 system_svc.go:56] duration metric: took 18.513073ms WaitForService to wait for kubelet.
	I0911 12:13:11.656313 2255814 kubeadm.go:581] duration metric: took 4m22.869270451s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:11.656342 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:11.660206 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:11.660242 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:11.660256 2255814 node_conditions.go:105] duration metric: took 3.907675ms to run NodePressure ...
	I0911 12:13:11.660271 2255814 start.go:228] waiting for startup goroutines ...
	I0911 12:13:11.660281 2255814 start.go:233] waiting for cluster config update ...
	I0911 12:13:11.660295 2255814 start.go:242] writing updated cluster config ...
	I0911 12:13:11.660673 2255814 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:11.716963 2255814 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:11.719502 2255814 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-484027" cluster and "default" namespace by default
	I0911 12:13:08.774412 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:10.776103 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:13.273773 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:15.274785 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:17.776143 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:20.274491 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:22.276115 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:24.776008 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:26.776415 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:29.274644 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:31.774477 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:33.774923 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:35.776441 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:37.777677 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:38.087732 2255048 pod_ready.go:81] duration metric: took 4m0.000743055s waiting for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	E0911 12:13:38.087774 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:13:38.087805 2255048 pod_ready.go:38] duration metric: took 4m11.950533095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:38.087877 2255048 kubeadm.go:640] restartCluster took 4m32.29342443s
	W0911 12:13:38.087958 2255048 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:13:38.088001 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:14:10.169576 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.081486969s)
	I0911 12:14:10.169706 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:10.189300 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:14:10.202385 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:14:10.213749 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
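The config check above exits with status 2 because, after `kubeadm reset`, none of the four kubeconfig files exist, so minikube skips stale-config cleanup and proceeds straight to a fresh `kubeadm init`. A small sketch of that existence check (the file list comes from the log; the behavior on failure is simplified to a message) is below.

package main

import (
	"fmt"
	"os"
)

// Skip stale-config cleanup when any of the expected kubeconfig files is
// missing; after `kubeadm reset` all four are gone, as in the log above.
func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		if _, err := os.Stat(f); err != nil {
			fmt.Printf("config check failed (%v), skipping stale config cleanup\n", err)
			return
		}
	}
	fmt.Println("all kubeconfig files present, cleaning up stale config")
}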
	I0911 12:14:10.213816 2255048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:14:10.279484 2255048 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:14:10.279634 2255048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:14:10.462302 2255048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:14:10.462488 2255048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:14:10.462634 2255048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:14:10.659475 2255048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:14:10.661923 2255048 out.go:204]   - Generating certificates and keys ...
	I0911 12:14:10.662086 2255048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:14:10.662142 2255048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:14:10.662223 2255048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:14:10.662303 2255048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:14:10.663973 2255048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:14:10.665836 2255048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:14:10.667292 2255048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:14:10.668584 2255048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:14:10.669931 2255048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:14:10.670570 2255048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:14:10.671008 2255048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:14:10.671087 2255048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:14:10.865541 2255048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:14:11.063586 2255048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:14:11.341833 2255048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:14:11.573561 2255048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:14:11.574128 2255048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:14:11.577101 2255048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:14:11.579311 2255048 out.go:204]   - Booting up control plane ...
	I0911 12:14:11.579427 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:14:11.579550 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:14:11.579644 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:14:11.598440 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:14:11.599446 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:14:11.599531 2255048 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:14:11.738771 2255048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:14:21.243059 2255048 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503809 seconds
	I0911 12:14:21.243215 2255048 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:14:21.262148 2255048 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:14:21.802567 2255048 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:14:21.802822 2255048 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-352076 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:14:22.320035 2255048 kubeadm.go:322] [bootstrap-token] Using token: 3xtym4.6ytyj76o1n15fsq8
	I0911 12:14:22.321759 2255048 out.go:204]   - Configuring RBAC rules ...
	I0911 12:14:22.321922 2255048 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:14:22.329851 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:14:22.344882 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:14:22.349640 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:14:22.354357 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:14:22.359463 2255048 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:14:22.380068 2255048 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:14:22.713378 2255048 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:14:22.780207 2255048 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:14:22.780252 2255048 kubeadm.go:322] 
	I0911 12:14:22.780331 2255048 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:14:22.780344 2255048 kubeadm.go:322] 
	I0911 12:14:22.780441 2255048 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:14:22.780450 2255048 kubeadm.go:322] 
	I0911 12:14:22.780489 2255048 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:14:22.780568 2255048 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:14:22.780648 2255048 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:14:22.780657 2255048 kubeadm.go:322] 
	I0911 12:14:22.780757 2255048 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:14:22.780791 2255048 kubeadm.go:322] 
	I0911 12:14:22.780876 2255048 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:14:22.780895 2255048 kubeadm.go:322] 
	I0911 12:14:22.780958 2255048 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:14:22.781054 2255048 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:14:22.781157 2255048 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:14:22.781168 2255048 kubeadm.go:322] 
	I0911 12:14:22.781264 2255048 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:14:22.781363 2255048 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:14:22.781374 2255048 kubeadm.go:322] 
	I0911 12:14:22.781490 2255048 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.781618 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:14:22.781684 2255048 kubeadm.go:322] 	--control-plane 
	I0911 12:14:22.781695 2255048 kubeadm.go:322] 
	I0911 12:14:22.781813 2255048 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:14:22.781830 2255048 kubeadm.go:322] 
	I0911 12:14:22.781956 2255048 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.782107 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:14:22.783393 2255048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:14:22.783423 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:14:22.783434 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:14:22.785623 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:14:22.787278 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:14:22.817914 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:14:22.857165 2255048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:14:22.857266 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:22.857282 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=no-preload-352076 minikube.k8s.io/updated_at=2023_09_11T12_14_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.375677 2255048 ops.go:34] apiserver oom_adj: -16
	I0911 12:14:23.375731 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.497980 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.128149 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.627110 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.127658 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.627595 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.127143 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.627803 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.128061 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.627169 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.128081 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.628055 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.127187 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.627707 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.127233 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.627943 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.127222 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.627921 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.127760 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.628112 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.128107 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.627835 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.127171 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.627113 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.127499 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.627255 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.127199 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.314187 2255048 kubeadm.go:1081] duration metric: took 13.456994708s to wait for elevateKubeSystemPrivileges.
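The repeated `kubectl get sa default` calls above are minikube waiting for the "default" service account to exist before granting kube-system privileges; the loop retries roughly every half second for about 13.5s here. A minimal sketch of the same wait (interval and timeout are illustrative, and kubectl is assumed to be on PATH and pointed at the new cluster rather than invoked over SSH with an explicit kubeconfig) follows.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// Poll until the "default" service account exists, the same idea as the
// repeated `kubectl get sa default` calls recorded above.
func main() {
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		if err := exec.Command("kubectl", "get", "sa", "default").Run(); err == nil {
			fmt.Println("default service account is ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for the default service account")
}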
	I0911 12:14:36.314241 2255048 kubeadm.go:406] StartCluster complete in 5m30.569752421s
	I0911 12:14:36.314272 2255048 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.314446 2255048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:14:36.317402 2255048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.317739 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:14:36.318031 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:14:36.317936 2255048 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:14:36.318110 2255048 addons.go:69] Setting storage-provisioner=true in profile "no-preload-352076"
	I0911 12:14:36.318135 2255048 addons.go:231] Setting addon storage-provisioner=true in "no-preload-352076"
	I0911 12:14:36.318137 2255048 addons.go:69] Setting default-storageclass=true in profile "no-preload-352076"
	I0911 12:14:36.318148 2255048 addons.go:69] Setting metrics-server=true in profile "no-preload-352076"
	I0911 12:14:36.318163 2255048 addons.go:231] Setting addon metrics-server=true in "no-preload-352076"
	I0911 12:14:36.318164 2255048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-352076"
	W0911 12:14:36.318169 2255048 addons.go:240] addon metrics-server should already be in state true
	I0911 12:14:36.318218 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	W0911 12:14:36.318143 2255048 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:14:36.318318 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.318696 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318710 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318720 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318723 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318738 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318741 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.337905 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0911 12:14:36.338002 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0911 12:14:36.338589 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.338678 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.339313 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339317 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339340 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339363 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339435 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0911 12:14:36.339903 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339909 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339981 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.340160 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.340463 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.340496 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.340588 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.340617 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.341051 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.341512 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.341540 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.359712 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0911 12:14:36.360342 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.360914 2255048 addons.go:231] Setting addon default-storageclass=true in "no-preload-352076"
	W0911 12:14:36.360941 2255048 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:14:36.360969 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.360969 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.360984 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.361238 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.361271 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.361350 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.361540 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.362624 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:14:36.363381 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.363731 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.364093 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.364114 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.366385 2255048 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:14:36.364716 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.368526 2255048 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.368557 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:14:36.368640 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.368799 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.371211 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.374123 2255048 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:14:36.373727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.374507 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.376914 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.376951 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.376846 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:14:36.376970 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:14:36.376991 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.377194 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.377424 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.377656 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.380757 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381482 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.381508 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381537 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.381783 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.381953 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.382098 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.383003 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0911 12:14:36.383415 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.383860 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.383884 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.384174 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.384600 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.384650 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.401421 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0911 12:14:36.401987 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.402660 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.402684 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.403172 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.403456 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.406003 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.406531 2255048 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.406567 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:14:36.406593 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.410520 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411016 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.411072 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411331 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.411517 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.411723 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.411895 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.448234 2255048 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-352076" context rescaled to 1 replicas
	I0911 12:14:36.448281 2255048 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:14:36.450615 2255048 out.go:177] * Verifying Kubernetes components...
	I0911 12:14:36.452566 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:36.600188 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:14:36.600187 2255048 node_ready.go:35] waiting up to 6m0s for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611125 2255048 node_ready.go:49] node "no-preload-352076" has status "Ready":"True"
	I0911 12:14:36.611167 2255048 node_ready.go:38] duration metric: took 10.942009ms waiting for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611181 2255048 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:36.632729 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:14:36.632759 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:14:36.640639 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:36.656421 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.659146 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.711603 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:14:36.711644 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:14:36.780574 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:36.780614 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:14:36.874964 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.969647165s)
	I0911 12:14:38.569949 2255048 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.91343277s)
	I0911 12:14:38.570001 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570017 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570428 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570469 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570484 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570440 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570495 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570786 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570801 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570803 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570820 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570830 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.571133 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.571183 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.571196 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.756212 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:39.258501 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599303563s)
	I0911 12:14:39.258567 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258581 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.258631 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.383622497s)
	I0911 12:14:39.258693 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258713 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259000 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259069 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259129 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259139 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259040 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259150 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259154 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259165 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259178 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259468 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259514 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259605 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259620 2255048 addons.go:467] Verifying addon metrics-server=true in "no-preload-352076"
	I0911 12:14:39.261573 2255048 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:14:39.263513 2255048 addons.go:502] enable addons completed in 2.945573816s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:14:41.194698 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:41.682872 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.682904 2255048 pod_ready.go:81] duration metric: took 5.042231142s waiting for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.682919 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.685265 2255048 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685295 2255048 pod_ready.go:81] duration metric: took 2.370305ms waiting for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	E0911 12:14:41.685306 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685313 2255048 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694255 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.694295 2255048 pod_ready.go:81] duration metric: took 8.974837ms waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694309 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700807 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.700854 2255048 pod_ready.go:81] duration metric: took 6.536644ms waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700869 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707895 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.707918 2255048 pod_ready.go:81] duration metric: took 7.041207ms waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707930 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880293 2255048 pod_ready.go:92] pod "kube-proxy-f5w2x" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.880329 2255048 pod_ready.go:81] duration metric: took 172.39121ms waiting for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880345 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280038 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:42.280066 2255048 pod_ready.go:81] duration metric: took 399.713688ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280074 2255048 pod_ready.go:38] duration metric: took 5.668879257s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:42.280093 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:14:42.280143 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:14:42.303868 2255048 api_server.go:72] duration metric: took 5.855535753s to wait for apiserver process to appear ...
	I0911 12:14:42.303906 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:14:42.303927 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:14:42.310890 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:14:42.313428 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:14:42.313455 2255048 api_server.go:131] duration metric: took 9.541682ms to wait for apiserver health ...
	I0911 12:14:42.313464 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:14:42.483863 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:14:42.483895 2255048 system_pods.go:61] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.483900 2255048 system_pods.go:61] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.483905 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.483909 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.483912 2255048 system_pods.go:61] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.483916 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.483923 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.483930 2255048 system_pods.go:61] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.483936 2255048 system_pods.go:74] duration metric: took 170.467243ms to wait for pod list to return data ...
	I0911 12:14:42.483945 2255048 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:14:42.679235 2255048 default_sa.go:45] found service account: "default"
	I0911 12:14:42.679270 2255048 default_sa.go:55] duration metric: took 195.319105ms for default service account to be created ...
	I0911 12:14:42.679284 2255048 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:14:42.883048 2255048 system_pods.go:86] 8 kube-system pods found
	I0911 12:14:42.883078 2255048 system_pods.go:89] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.883084 2255048 system_pods.go:89] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.883089 2255048 system_pods.go:89] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.883093 2255048 system_pods.go:89] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.883097 2255048 system_pods.go:89] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.883103 2255048 system_pods.go:89] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.883110 2255048 system_pods.go:89] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.883118 2255048 system_pods.go:89] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.883126 2255048 system_pods.go:126] duration metric: took 203.835523ms to wait for k8s-apps to be running ...
	I0911 12:14:42.883133 2255048 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:14:42.883181 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:42.897962 2255048 system_svc.go:56] duration metric: took 14.812893ms WaitForService to wait for kubelet.
	I0911 12:14:42.898000 2255048 kubeadm.go:581] duration metric: took 6.449678905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:14:42.898022 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:14:43.080859 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:14:43.080890 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:14:43.080901 2255048 node_conditions.go:105] duration metric: took 182.874167ms to run NodePressure ...
	I0911 12:14:43.080913 2255048 start.go:228] waiting for startup goroutines ...
	I0911 12:14:43.080919 2255048 start.go:233] waiting for cluster config update ...
	I0911 12:14:43.080930 2255048 start.go:242] writing updated cluster config ...
	I0911 12:14:43.081223 2255048 ssh_runner.go:195] Run: rm -f paused
	I0911 12:14:43.135636 2255048 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:14:43.137835 2255048 out.go:177] * Done! kubectl is now configured to use "no-preload-352076" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:08:33 UTC, ends at Mon 2023-09-11 12:23:45 UTC. --
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.806793054Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=22430838-e975-4e41-92df-754fe7b7e0d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.806957502Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=22430838-e975-4e41-92df-754fe7b7e0d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.807301984Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=22430838-e975-4e41-92df-754fe7b7e0d2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.845489667Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=42f751eb-9008-423f-954b-15d1ab0331a2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.845583587Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=42f751eb-9008-423f-954b-15d1ab0331a2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.845772781Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=42f751eb-9008-423f-954b-15d1ab0331a2 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.876559306Z" level=debug msg="Request: &VersionRequest{Version:0.1.0,}" file="go-grpc-middleware/chain.go:25" id=66ad1917-b7f2-4937-a18c-238fe19630ed name=/runtime.v1.RuntimeService/Version
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.876655599Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.24.1,RuntimeApiVersion:v1,}" file="go-grpc-middleware/chain.go:25" id=66ad1917-b7f2-4937-a18c-238fe19630ed name=/runtime.v1.RuntimeService/Version
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.886000900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bf3b814e-4f2a-4a30-ba12-d91261a4ef69 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.886188041Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bf3b814e-4f2a-4a30-ba12-d91261a4ef69 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.886718535Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bf3b814e-4f2a-4a30-ba12-d91261a4ef69 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.924452571Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=c437dc14-7877-4538-9ac8-871d5e26fd05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.924520913Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=c437dc14-7877-4538-9ac8-871d5e26fd05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.924723899Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=c437dc14-7877-4538-9ac8-871d5e26fd05 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.963333217Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=5fd6d467-0ad6-49d4-8c79-50f83fdcb10b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.963459721Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=5fd6d467-0ad6-49d4-8c79-50f83fdcb10b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.963649432Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=5fd6d467-0ad6-49d4-8c79-50f83fdcb10b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.993574189Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7d4b449a-d0f2-4bfd-b8f0-e52868a7a90e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.993665909Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=7d4b449a-d0f2-4bfd-b8f0-e52868a7a90e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:44 no-preload-352076 crio[710]: time="2023-09-11 12:23:44.993845619Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=7d4b449a-d0f2-4bfd-b8f0-e52868a7a90e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:23:45 no-preload-352076 crio[710]: time="2023-09-11 12:23:45.010463573Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=7542301c-5d37-4eb2-b6df-8b7760c06f44 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 12:23:45 no-preload-352076 crio[710]: time="2023-09-11 12:23:45.010740531Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:c5d1acfb-fa11-4a73-9176-21aee3e2ab99,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434479603650353,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-
system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2023-09-11T12:14:39.265714660Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c542194289f6021e4fccf1ebabc22225989fb23c4b922de11583c284ac08a69e,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-r8mgg,Uid:a54edaa0-b800-48f3-99bc-7d38adb834d0,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434479342278086,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-r8mgg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a54edaa0-b800-48f3-99bc-7d38adb834d0
,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:14:39.002611324Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-6w2w7,Uid:fe585a8f-a92f-4497-b399-d759c995f9e6,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434478050854464,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:14:36.178207279Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&PodSandboxMetadata{Name:kube-proxy-f5w2x,Uid:03e8a2b5-aaf8-4fd7-920e-033
a44729398,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434476435823585,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:14:36.082898995Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&PodSandboxMetadata{Name:kube-scheduler-no-preload-352076,Uid:f7cea54bc5023a25cc6c8d99a5d8b950,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434453072720920,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f7cea54bc5023a25cc6c8d99a5d8
b950,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f7cea54bc5023a25cc6c8d99a5d8b950,kubernetes.io/config.seen: 2023-09-11T12:14:12.494171870Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-no-preload-352076,Uid:e5c2adf841bc1ed23c1212ed6429e003,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434453066462452,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e5c2adf841bc1ed23c1212ed6429e003,kubernetes.io/config.seen: 2023-09-11T12:14:12.494171027Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:75302ad460aebfcfdd40f0
77b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&PodSandboxMetadata{Name:kube-apiserver-no-preload-352076,Uid:c08efb199081eefea7071b4f0ff8574c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434453025496789,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.72.157:8443,kubernetes.io/config.hash: c08efb199081eefea7071b4f0ff8574c,kubernetes.io/config.seen: 2023-09-11T12:14:12.494169925Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&PodSandboxMetadata{Name:etcd-no-preload-352076,Uid:8700a49322597c8b3583eccc1568ff8e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434
453019693397,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.72.157:2379,kubernetes.io/config.hash: 8700a49322597c8b3583eccc1568ff8e,kubernetes.io/config.seen: 2023-09-11T12:14:12.494165714Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=7542301c-5d37-4eb2-b6df-8b7760c06f44 name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 12:23:45 no-preload-352076 crio[710]: time="2023-09-11 12:23:45.011675982Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4878eb14-2963-401f-ad9c-4021d7bf21f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:23:45 no-preload-352076 crio[710]: time="2023-09-11 12:23:45.011757509Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4878eb14-2963-401f-ad9c-4021d7bf21f9 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:23:45 no-preload-352076 crio[710]: time="2023-09-11 12:23:45.011922862Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4878eb14-2963-401f-ad9c-4021d7bf21f9 name=/runtime.v1.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	0a0c88ff1a170       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   a10586f48a6b4
	14521a0d7dd6e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   ab9742ea8a542
	415dac0b82907       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   9 minutes ago       Running             kube-proxy                0                   7b12c788adaf9
	ffa489dcdfa40       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   9 minutes ago       Running             kube-scheduler            2                   15a95c507ead0
	262c730a5965c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   8f7ed7ddc0b5c
	286d8fe64e428       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   9 minutes ago       Running             kube-apiserver            2                   75302ad460aeb
	20d2f3a34c9c4       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   9 minutes ago       Running             kube-controller-manager   2                   b33e71d3d0f20
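The listing above uses the same columns as crictl's container listing (CONTAINER, IMAGE, CREATED, STATE, NAME, ATTEMPT, POD ID). Assuming SSH access to the node via the profile name shown in the report, a comparable snapshot could be taken with something like the following sketch (not necessarily the exact command the harness ran):

    out/minikube-linux-amd64 -p no-preload-352076 ssh "sudo crictl ps -a"   # containers with state, attempt count and pod ID
    out/minikube-linux-amd64 -p no-preload-352076 ssh "sudo crictl pods"    # the pod sandboxes those containers run in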
	
	* 
	* ==> coredns [14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37861 - 3620 "HINFO IN 8702822923671551097.7223591626485362324. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011145084s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-352076
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-352076
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=no-preload-352076
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T12_14_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 12:14:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-352076
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 12:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:19:49 +0000   Mon, 11 Sep 2023 12:14:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:19:49 +0000   Mon, 11 Sep 2023 12:14:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:19:49 +0000   Mon, 11 Sep 2023 12:14:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:19:49 +0000   Mon, 11 Sep 2023 12:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.157
	  Hostname:    no-preload-352076
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0122708b2c1a4702991090a6268bbc2f
	  System UUID:                0122708b-2c1a-4702-9910-90a6268bbc2f
	  Boot ID:                    658ac0ba-9db1-4043-b24a-5bbe17435b9e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6w2w7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-no-preload-352076                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m22s
	  kube-system                 kube-apiserver-no-preload-352076             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-controller-manager-no-preload-352076    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 kube-proxy-f5w2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-no-preload-352076             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m22s
	  kube-system                 metrics-server-57f55c9bc5-r8mgg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m7s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m6s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m5s                   kube-proxy       
	  Normal  Starting                 9m33s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m33s (x9 over 9m33s)  kubelet          Node no-preload-352076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m33s (x7 over 9m33s)  kubelet          Node no-preload-352076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m33s (x7 over 9m33s)  kubelet          Node no-preload-352076 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m33s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m22s                  kubelet          Node no-preload-352076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m22s                  kubelet          Node no-preload-352076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m22s                  kubelet          Node no-preload-352076 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m22s                  kubelet          Node no-preload-352076 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m22s                  kubelet          Node no-preload-352076 status is now: NodeReady
	  Normal  RegisteredNode           9m10s                  node-controller  Node no-preload-352076 event: Registered Node no-preload-352076 in Controller
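The node description above is standard kubectl output. Assuming the kubeconfig context carries the same name as the profile (which minikube normally sets up), roughly the following would reproduce it:

    kubectl --context no-preload-352076 describe node no-preload-352076

The context name is an assumption here; the report itself only records the node/profile name.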
	
	* 
	* ==> dmesg <==
	* [Sep11 12:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.102368] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.505583] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.952728] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.174470] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.583765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.658194] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.134197] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.165519] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.126876] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.263940] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[Sep11 12:09] systemd-fstab-generator[1213]: Ignoring "noauto" for root device
	[ +19.715922] kauditd_printk_skb: 29 callbacks suppressed
	[Sep11 12:14] systemd-fstab-generator[3809]: Ignoring "noauto" for root device
	[ +10.831527] systemd-fstab-generator[4138]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb] <==
	* {"level":"info","ts":"2023-09-11T12:14:16.426237Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T12:14:16.426265Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2023-09-11T12:14:16.433844Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.72.157:2380"}
	{"level":"info","ts":"2023-09-11T12:14:16.433937Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.72.157:2380"}
	{"level":"info","ts":"2023-09-11T12:14:16.433752Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T12:14:16.439639Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e97ba2b9037c192e","initial-advertise-peer-urls":["https://192.168.72.157:2380"],"listen-peer-urls":["https://192.168.72.157:2380"],"advertise-client-urls":["https://192.168.72.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T12:14:16.439746Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T12:14:16.751548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-11T12:14:16.751615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-11T12:14:16.751657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e received MsgPreVoteResp from e97ba2b9037c192e at term 1"}
	{"level":"info","ts":"2023-09-11T12:14:16.751672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became candidate at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.751678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e received MsgVoteResp from e97ba2b9037c192e at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.751688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became leader at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.751707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e97ba2b9037c192e elected leader e97ba2b9037c192e at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.753634Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e97ba2b9037c192e","local-member-attributes":"{Name:no-preload-352076 ClientURLs:[https://192.168.72.157:2379]}","request-path":"/0/members/e97ba2b9037c192e/attributes","cluster-id":"2d4154f8677556f0","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T12:14:16.753876Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.75412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:14:16.755358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T12:14:16.755429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T12:14:16.755485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2d4154f8677556f0","local-member-id":"e97ba2b9037c192e","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.755533Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:14:16.755755Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.755817Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.755493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T12:14:16.762518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.157:2379"}
	
	* 
	* ==> kernel <==
	*  12:23:45 up 15 min,  0 users,  load average: 0.16, 0.23, 0.22
	Linux no-preload-352076 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18] <==
	* E0911 12:19:19.692995       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:19:19.694427       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:20:18.604792       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.239.255:443: connect: connection refused
	I0911 12:20:18.604985       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:20:19.693232       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:20:19.693291       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:20:19.693299       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:20:19.694740       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:20:19.694885       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:20:19.694919       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:21:18.604503       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.239.255:443: connect: connection refused
	I0911 12:21:18.604761       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0911 12:22:18.605261       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.239.255:443: connect: connection refused
	I0911 12:22:18.605322       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:22:19.693861       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:22:19.693916       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:22:19.693924       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:22:19.695180       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:22:19.695358       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:22:19.695394       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:23:18.605372       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.239.255:443: connect: connection refused
	I0911 12:23:18.605445       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
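The repeated 503s above all point at the aggregated metrics API (v1beta1.metrics.k8s.io, served by kube-system/metrics-server) being unreachable throughout the window. As a follow-up check, and assuming the same context-name convention as above, the APIService and its backing endpoints could be inspected with:

    kubectl --context no-preload-352076 get apiservice v1beta1.metrics.k8s.io
    kubectl --context no-preload-352076 -n kube-system get endpoints metrics-server

These are offered as a diagnostic sketch; the report does not show them being run.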
	
	* 
	* ==> kube-controller-manager [20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067] <==
	* I0911 12:18:06.491351       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:18:35.978042       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:18:36.503562       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:19:05.986858       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:19:06.514185       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:19:35.994800       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:19:36.524782       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:20:06.005981       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:20:06.536232       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:20:30.035331       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="525.96µs"
	E0911 12:20:36.013723       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:20:36.546984       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:20:45.034122       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="142.043µs"
	E0911 12:21:06.020715       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:21:06.558888       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:21:36.028562       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:21:36.570121       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:22:06.035907       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:22:06.580872       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:22:36.044236       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:22:36.590477       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:23:06.053187       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:23:06.601013       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:23:36.060535       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:23:36.610952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0] <==
	* I0911 12:14:39.009971       1 server_others.go:69] "Using iptables proxy"
	I0911 12:14:39.203688       1 node.go:141] Successfully retrieved node IP: 192.168.72.157
	I0911 12:14:39.491235       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 12:14:39.491324       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 12:14:39.497310       1 server_others.go:152] "Using iptables Proxier"
	I0911 12:14:39.497663       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 12:14:39.497978       1 server.go:846] "Version info" version="v1.28.1"
	I0911 12:14:39.498410       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:14:39.499603       1 config.go:188] "Starting service config controller"
	I0911 12:14:39.499762       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 12:14:39.499906       1 config.go:97] "Starting endpoint slice config controller"
	I0911 12:14:39.499987       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 12:14:39.503433       1 config.go:315] "Starting node config controller"
	I0911 12:14:39.503578       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 12:14:39.601660       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 12:14:39.654332       1 shared_informer.go:318] Caches are synced for service config
	I0911 12:14:39.654364       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c] <==
	* W0911 12:14:18.850392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 12:14:18.850422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 12:14:18.849994       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 12:14:18.850473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 12:14:19.711609       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 12:14:19.711736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 12:14:19.782452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 12:14:19.782548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 12:14:19.789783       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 12:14:19.789899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 12:14:19.855991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 12:14:19.856120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 12:14:19.947335       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 12:14:19.947391       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 12:14:19.964462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 12:14:19.964529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 12:14:20.045503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 12:14:20.045578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0911 12:14:20.093463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 12:14:20.093561       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0911 12:14:20.208016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 12:14:20.208223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0911 12:14:20.242961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 12:14:20.243169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0911 12:14:21.915567       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:08:33 UTC, ends at Mon 2023-09-11 12:23:45 UTC. --
	Sep 11 12:20:56 no-preload-352076 kubelet[4146]: E0911 12:20:56.011680    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:21:11 no-preload-352076 kubelet[4146]: E0911 12:21:11.014313    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:21:22 no-preload-352076 kubelet[4146]: E0911 12:21:22.012396    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:21:23 no-preload-352076 kubelet[4146]: E0911 12:21:23.157828    4146 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:21:23 no-preload-352076 kubelet[4146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:21:23 no-preload-352076 kubelet[4146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:21:23 no-preload-352076 kubelet[4146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:21:37 no-preload-352076 kubelet[4146]: E0911 12:21:37.011927    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:21:51 no-preload-352076 kubelet[4146]: E0911 12:21:51.013871    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:22:04 no-preload-352076 kubelet[4146]: E0911 12:22:04.011592    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:22:16 no-preload-352076 kubelet[4146]: E0911 12:22:16.011307    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:22:23 no-preload-352076 kubelet[4146]: E0911 12:22:23.156718    4146 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:22:23 no-preload-352076 kubelet[4146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:22:23 no-preload-352076 kubelet[4146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:22:23 no-preload-352076 kubelet[4146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:22:31 no-preload-352076 kubelet[4146]: E0911 12:22:31.011886    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:22:46 no-preload-352076 kubelet[4146]: E0911 12:22:46.011463    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:23:01 no-preload-352076 kubelet[4146]: E0911 12:23:01.012482    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:23:13 no-preload-352076 kubelet[4146]: E0911 12:23:13.016040    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:23:23 no-preload-352076 kubelet[4146]: E0911 12:23:23.156049    4146 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:23:23 no-preload-352076 kubelet[4146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:23:23 no-preload-352076 kubelet[4146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:23:23 no-preload-352076 kubelet[4146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:23:28 no-preload-352076 kubelet[4146]: E0911 12:23:28.012512    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:23:39 no-preload-352076 kubelet[4146]: E0911 12:23:39.011965    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	
	* 
	* ==> storage-provisioner [0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895] <==
	* I0911 12:14:40.681805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:14:40.696183       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:14:40.696379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:14:40.708716       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:14:40.710557       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-352076_62ea1927-0c6d-4568-abab-cc82d93b0ac1!
	I0911 12:14:40.709035       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9c563c4-5421-4c9d-90e2-aa74b649c30e", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-352076_62ea1927-0c6d-4568-abab-cc82d93b0ac1 became leader
	I0911 12:14:40.811537       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-352076_62ea1927-0c6d-4568-abab-cc82d93b0ac1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-352076 -n no-preload-352076
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-352076 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-r8mgg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-352076 describe pod metrics-server-57f55c9bc5-r8mgg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-352076 describe pod metrics-server-57f55c9bc5-r8mgg: exit status 1 (75.015053ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-r8mgg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-352076 describe pod metrics-server-57f55c9bc5-r8mgg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (543.29s)
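Note: the repeated ImagePullBackOff entries in the kubelet log above are expected here, since metrics-server was enabled with --registries=MetricsServer=fake.domain (see the Audit table further down), so the fake.domain/registry.k8s.io/echoserver:1.4 pull cannot succeed. A minimal sketch of how the post-mortem above could be repeated by hand, assuming the no-preload-352076 context still exists and that the addon carries the upstream k8s-app=metrics-server label (the generated pod name changes between runs):

	# list pods that are not Running in any namespace (the same check helpers_test.go:261 performs)
	kubectl --context no-preload-352076 get po -A --field-selector=status.phase!=Running
	# describe the stuck pod by label (label name is an assumption) rather than by the generated pod name
	kubectl --context no-preload-352076 -n kube-system describe pod -l k8s-app=metrics-server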

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (537.41s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0911 12:18:47.569083 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 12:19:15.053045 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 12:20:10.620654 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 12:21:22.842706 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642215 -n old-k8s-version-642215
start_stop_delete_test.go:287: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:26:38.606128861 +0000 UTC m=+5403.918753767
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-642215 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-642215 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.78µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-642215 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
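The check at start_stop_delete_test.go:297 compares the images on the dashboard-metrics-scraper deployment against registry.k8s.io/echoserver:1.4; because the describe call above hit the context deadline, the deployment info came back empty. A minimal sketch of how the same information could be pulled by hand, assuming the old-k8s-version-642215 cluster is still reachable:

	# print the container images configured on the dashboard-metrics-scraper deployment
	kubectl --context old-k8s-version-642215 -n kubernetes-dashboard get deploy/dashboard-metrics-scraper -o jsonpath='{.spec.template.spec.containers[*].image}'
	# list the pods the test was waiting for
	kubectl --context old-k8s-version-642215 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard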
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-642215 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-642215 logs -n 25: (1.684877393s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:57 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| ssh     | cert-options-559775 ssh                                | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | openssl x509 -text -noout -in                          |                              |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt                  |                              |         |         |                     |                     |
	| ssh     | -p cert-options-559775 -- sudo                         | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559775                                 | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-352076             | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:59 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-235462            | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642215        | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:01 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-226537 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	|         | disable-driver-mounts-226537                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:02 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-352076                  | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484027  | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-235462                 | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642215             | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484027       | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC | 11 Sep 23 12:13 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:04:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:04:58.034724 2255814 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:04:58.034920 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.034929 2255814 out.go:309] Setting ErrFile to fd 2...
	I0911 12:04:58.034933 2255814 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:04:58.035102 2255814 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:04:58.035709 2255814 out.go:303] Setting JSON to false
	I0911 12:04:58.036651 2255814 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236849,"bootTime":1694197049,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:04:58.036727 2255814 start.go:138] virtualization: kvm guest
	I0911 12:04:58.039239 2255814 out.go:177] * [default-k8s-diff-port-484027] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:04:58.041110 2255814 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:04:58.041181 2255814 notify.go:220] Checking for updates...
	I0911 12:04:58.042795 2255814 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:04:58.044550 2255814 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:04:58.046047 2255814 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:04:58.047718 2255814 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:04:58.049343 2255814 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:04:58.051545 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:04:58.051956 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.052047 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.068212 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42889
	I0911 12:04:58.068649 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.069289 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.069318 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.069763 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.069987 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.070276 2255814 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:04:58.070629 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:04:58.070670 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:04:58.085941 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39345
	I0911 12:04:58.086461 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:04:58.086966 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:04:58.086995 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:04:58.087337 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:04:58.087522 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:04:58.126206 2255814 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 12:04:58.127558 2255814 start.go:298] selected driver: kvm2
	I0911 12:04:58.127571 2255814 start.go:902] validating driver "kvm2" against &{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.127716 2255814 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:04:58.128430 2255814 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.128555 2255814 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:04:58.144660 2255814 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:04:58.145091 2255814 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 12:04:58.145145 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:04:58.145159 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:04:58.145176 2255814 start_flags.go:321] config:
	{Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:04:58.145377 2255814 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:04:58.147634 2255814 out.go:177] * Starting control plane node default-k8s-diff-port-484027 in cluster default-k8s-diff-port-484027
	I0911 12:04:56.741109 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:04:58.149438 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:04:58.149510 2255814 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:04:58.149543 2255814 cache.go:57] Caching tarball of preloaded images
	I0911 12:04:58.149650 2255814 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:04:58.149664 2255814 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:04:58.149825 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:04:58.150070 2255814 start.go:365] acquiring machines lock for default-k8s-diff-port-484027: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:04:59.813165 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:05.893188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:08.965171 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:15.045168 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:18.117188 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:24.197148 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:27.269089 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:33.349151 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:36.421191 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:42.501129 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:45.573209 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:51.653159 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:05:54.725153 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:00.805201 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:03.877105 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:09.957136 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:13.029119 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:19.109157 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:22.181096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:28.261156 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:31.333179 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:37.413187 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:40.485239 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:46.565193 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:49.637182 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:55.717194 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:06:58.789181 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:04.869137 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:07.941096 2255048 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.72.157:22: connect: no route to host
	I0911 12:07:10.946790 2255187 start.go:369] acquired machines lock for "embed-certs-235462" in 4m28.227506413s
	I0911 12:07:10.946859 2255187 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:10.946884 2255187 fix.go:54] fixHost starting: 
	I0911 12:07:10.947279 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:10.947318 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:10.963823 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0911 12:07:10.964352 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:10.965050 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:07:10.965086 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:10.965556 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:10.965804 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:10.965995 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:07:10.967759 2255187 fix.go:102] recreateIfNeeded on embed-certs-235462: state=Stopped err=<nil>
	I0911 12:07:10.967790 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	W0911 12:07:10.968000 2255187 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:10.970103 2255187 out.go:177] * Restarting existing kvm2 VM for "embed-certs-235462" ...
	I0911 12:07:10.971879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Start
	I0911 12:07:10.972130 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring networks are active...
	I0911 12:07:10.973084 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network default is active
	I0911 12:07:10.973424 2255187 main.go:141] libmachine: (embed-certs-235462) Ensuring network mk-embed-certs-235462 is active
	I0911 12:07:10.973888 2255187 main.go:141] libmachine: (embed-certs-235462) Getting domain xml...
	I0911 12:07:10.974726 2255187 main.go:141] libmachine: (embed-certs-235462) Creating domain...
	I0911 12:07:12.246736 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting to get IP...
	I0911 12:07:12.247648 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.248019 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.248152 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.248016 2256167 retry.go:31] will retry after 245.040457ms: waiting for machine to come up
	I0911 12:07:12.494788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.495311 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.495345 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.495247 2256167 retry.go:31] will retry after 312.634812ms: waiting for machine to come up
	I0911 12:07:10.943345 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:10.943403 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:07:10.946565 2255048 machine.go:91] provisioned docker machine in 4m37.405921901s
	I0911 12:07:10.946641 2255048 fix.go:56] fixHost completed within 4m37.430192233s
	I0911 12:07:10.946648 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 4m37.430236677s
	W0911 12:07:10.946673 2255048 start.go:672] error starting host: provision: host is not running
	W0911 12:07:10.946819 2255048 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0911 12:07:10.946833 2255048 start.go:687] Will try again in 5 seconds ...
	I0911 12:07:12.810038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:12.810461 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:12.810496 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:12.810398 2256167 retry.go:31] will retry after 478.036066ms: waiting for machine to come up
	I0911 12:07:13.290252 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.290701 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.290731 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.290646 2256167 retry.go:31] will retry after 576.124591ms: waiting for machine to come up
	I0911 12:07:13.868555 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:13.868978 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:13.869004 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:13.868931 2256167 retry.go:31] will retry after 487.107859ms: waiting for machine to come up
	I0911 12:07:14.357765 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:14.358240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:14.358315 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:14.358173 2256167 retry.go:31] will retry after 903.857312ms: waiting for machine to come up
	I0911 12:07:15.263350 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:15.263852 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:15.263908 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:15.263777 2256167 retry.go:31] will retry after 830.555039ms: waiting for machine to come up
	I0911 12:07:16.096337 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:16.096743 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:16.096774 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:16.096696 2256167 retry.go:31] will retry after 1.307188723s: waiting for machine to come up
	I0911 12:07:17.406129 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:17.406558 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:17.406584 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:17.406512 2256167 retry.go:31] will retry after 1.681887732s: waiting for machine to come up
	I0911 12:07:15.947503 2255048 start.go:365] acquiring machines lock for no-preload-352076: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:07:19.090590 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:19.091013 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:19.091038 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:19.090965 2256167 retry.go:31] will retry after 2.013298988s: waiting for machine to come up
	I0911 12:07:21.105851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:21.106384 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:21.106418 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:21.106319 2256167 retry.go:31] will retry after 2.714578164s: waiting for machine to come up
	I0911 12:07:23.823181 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:23.823687 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:23.823772 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:23.823623 2256167 retry.go:31] will retry after 2.321779277s: waiting for machine to come up
	I0911 12:07:26.147527 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:26.147956 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | unable to find current IP address of domain embed-certs-235462 in network mk-embed-certs-235462
	I0911 12:07:26.147986 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | I0911 12:07:26.147896 2256167 retry.go:31] will retry after 4.307300197s: waiting for machine to come up
	I0911 12:07:31.786165 2255304 start.go:369] acquired machines lock for "old-k8s-version-642215" in 4m38.564304718s
	I0911 12:07:31.786239 2255304 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:31.786261 2255304 fix.go:54] fixHost starting: 
	I0911 12:07:31.786754 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:31.786809 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:31.806853 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36137
	I0911 12:07:31.807320 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:31.807871 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:07:31.807906 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:31.808284 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:31.808473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:31.808622 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:07:31.810311 2255304 fix.go:102] recreateIfNeeded on old-k8s-version-642215: state=Stopped err=<nil>
	I0911 12:07:31.810345 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	W0911 12:07:31.810524 2255304 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:31.813302 2255304 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-642215" ...
	I0911 12:07:30.458075 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.458554 2255187 main.go:141] libmachine: (embed-certs-235462) Found IP for machine: 192.168.50.96
	I0911 12:07:30.458579 2255187 main.go:141] libmachine: (embed-certs-235462) Reserving static IP address...
	I0911 12:07:30.458593 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has current primary IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.459036 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.459066 2255187 main.go:141] libmachine: (embed-certs-235462) Reserved static IP address: 192.168.50.96
	I0911 12:07:30.459088 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | skip adding static IP to network mk-embed-certs-235462 - found existing host DHCP lease matching {name: "embed-certs-235462", mac: "52:54:00:2b:a0:6e", ip: "192.168.50.96"}
	I0911 12:07:30.459104 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Getting to WaitForSSH function...
	I0911 12:07:30.459117 2255187 main.go:141] libmachine: (embed-certs-235462) Waiting for SSH to be available...
	I0911 12:07:30.461594 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.461938 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.461979 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.462087 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH client type: external
	I0911 12:07:30.462109 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa (-rw-------)
	I0911 12:07:30.462146 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.96 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:30.462165 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | About to run SSH command:
	I0911 12:07:30.462200 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | exit 0
	I0911 12:07:30.556976 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:30.557370 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetConfigRaw
	I0911 12:07:30.558054 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.560898 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561254 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.561292 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.561638 2255187 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/config.json ...
	I0911 12:07:30.561863 2255187 machine.go:88] provisioning docker machine ...
	I0911 12:07:30.561885 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:30.562128 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562296 2255187 buildroot.go:166] provisioning hostname "embed-certs-235462"
	I0911 12:07:30.562315 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.562497 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.565095 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565484 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.565519 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.565682 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.565852 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566021 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.566126 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.566273 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.566796 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.566814 2255187 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-235462 && echo "embed-certs-235462" | sudo tee /etc/hostname
	I0911 12:07:30.706262 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-235462
	
	I0911 12:07:30.706294 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.709499 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.709822 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.709862 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.710067 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:30.710331 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710598 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:30.710762 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:30.710986 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:30.711479 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:30.711503 2255187 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-235462' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-235462/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-235462' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:30.850084 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:30.850120 2255187 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:30.850141 2255187 buildroot.go:174] setting up certificates
	I0911 12:07:30.850155 2255187 provision.go:83] configureAuth start
	I0911 12:07:30.850168 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetMachineName
	I0911 12:07:30.850494 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:30.853326 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853650 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.853680 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.853864 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.856233 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856574 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:30.856639 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:30.856755 2255187 provision.go:138] copyHostCerts
	I0911 12:07:30.856844 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:30.856859 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:30.856933 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:30.857039 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:30.857050 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:30.857078 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:30.857143 2255187 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:30.857150 2255187 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:30.857170 2255187 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:30.857217 2255187 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.embed-certs-235462 san=[192.168.50.96 192.168.50.96 localhost 127.0.0.1 minikube embed-certs-235462]
	I0911 12:07:30.996533 2255187 provision.go:172] copyRemoteCerts
	I0911 12:07:30.996607 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:30.996643 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:30.999950 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.000370 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.000514 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.000787 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.000978 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.001133 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.095524 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:31.121456 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:31.145813 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0911 12:07:31.171621 2255187 provision.go:86] duration metric: configureAuth took 321.448095ms
	I0911 12:07:31.171657 2255187 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:31.171880 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:07:31.171975 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.175276 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.175783 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.175819 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.176082 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.176356 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176535 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.176724 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.177014 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.177500 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.177521 2255187 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:31.514064 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:31.514090 2255187 machine.go:91] provisioned docker machine in 952.213137ms
	I0911 12:07:31.514101 2255187 start.go:300] post-start starting for "embed-certs-235462" (driver="kvm2")
	I0911 12:07:31.514135 2255187 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:31.514188 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.514651 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:31.514705 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.517108 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517563 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.517599 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.517819 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.518053 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.518243 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.518426 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.612293 2255187 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:31.616991 2255187 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:31.617022 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:31.617143 2255187 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:31.617263 2255187 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:31.617393 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:31.627725 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:31.652196 2255187 start.go:303] post-start completed in 138.067305ms
	I0911 12:07:31.652232 2255187 fix.go:56] fixHost completed within 20.705348144s
	I0911 12:07:31.652264 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.655234 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655598 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.655633 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.655810 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.656000 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656236 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.656373 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.656547 2255187 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:31.657061 2255187 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.96 22 <nil> <nil>}
	I0911 12:07:31.657078 2255187 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:31.785981 2255187 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434051.730508911
	
	I0911 12:07:31.786019 2255187 fix.go:206] guest clock: 1694434051.730508911
	I0911 12:07:31.786029 2255187 fix.go:219] Guest: 2023-09-11 12:07:31.730508911 +0000 UTC Remote: 2023-09-11 12:07:31.65223725 +0000 UTC m=+289.079171252 (delta=78.271661ms)
	I0911 12:07:31.786076 2255187 fix.go:190] guest clock delta is within tolerance: 78.271661ms
	I0911 12:07:31.786082 2255187 start.go:83] releasing machines lock for "embed-certs-235462", held for 20.839248295s
	I0911 12:07:31.786115 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.786440 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:31.789427 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.789809 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.789844 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.790024 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790717 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.790954 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:07:31.791062 2255187 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:31.791130 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.791177 2255187 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:31.791208 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:07:31.793991 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794359 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794393 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794414 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.794669 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.794879 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.794871 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:31.794913 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:31.795104 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:07:31.795112 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795289 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:07:31.795291 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.795468 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:07:31.795585 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:07:31.910483 2255187 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:31.916739 2255187 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:32.059621 2255187 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:32.066857 2255187 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:32.066955 2255187 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:32.084365 2255187 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:32.084401 2255187 start.go:466] detecting cgroup driver to use...
	I0911 12:07:32.084518 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:32.098782 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:32.111344 2255187 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:32.111421 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:32.124323 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:32.137910 2255187 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:32.244478 2255187 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:32.374160 2255187 docker.go:212] disabling docker service ...
	I0911 12:07:32.374262 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:32.387762 2255187 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:32.401120 2255187 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:32.522150 2255187 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:31.815672 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Start
	I0911 12:07:31.815900 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring networks are active...
	I0911 12:07:31.816771 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network default is active
	I0911 12:07:31.817161 2255304 main.go:141] libmachine: (old-k8s-version-642215) Ensuring network mk-old-k8s-version-642215 is active
	I0911 12:07:31.817559 2255304 main.go:141] libmachine: (old-k8s-version-642215) Getting domain xml...
	I0911 12:07:31.818275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Creating domain...
	I0911 12:07:32.639647 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:32.658106 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:32.677573 2255187 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:07:32.677658 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.687407 2255187 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:32.687499 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.697706 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.707515 2255187 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:32.718090 2255187 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:32.728668 2255187 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:32.737652 2255187 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:32.737732 2255187 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:32.751510 2255187 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:32.760774 2255187 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:32.881718 2255187 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:07:33.064736 2255187 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:33.064859 2255187 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:33.071112 2255187 start.go:534] Will wait 60s for crictl version
	I0911 12:07:33.071195 2255187 ssh_runner.go:195] Run: which crictl
	I0911 12:07:33.075202 2255187 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:33.111795 2255187 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:33.111898 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.162455 2255187 ssh_runner.go:195] Run: crio --version
	I0911 12:07:33.224538 2255187 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:07:33.226156 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetIP
	I0911 12:07:33.229640 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230164 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:07:33.230202 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:07:33.230434 2255187 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:33.235232 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:33.248016 2255187 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:07:33.248094 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:33.290506 2255187 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:07:33.290594 2255187 ssh_runner.go:195] Run: which lz4
	I0911 12:07:33.294802 2255187 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:07:33.299115 2255187 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:33.299169 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:07:35.241115 2255187 crio.go:444] Took 1.946355 seconds to copy over tarball
	I0911 12:07:35.241211 2255187 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:07:33.131519 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting to get IP...
	I0911 12:07:33.132551 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.133144 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.133255 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.133123 2256281 retry.go:31] will retry after 206.885556ms: waiting for machine to come up
	I0911 12:07:33.341966 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.342472 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.342495 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.342420 2256281 retry.go:31] will retry after 235.74047ms: waiting for machine to come up
	I0911 12:07:33.580161 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.580683 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.580720 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.580644 2256281 retry.go:31] will retry after 407.752379ms: waiting for machine to come up
	I0911 12:07:33.990505 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:33.991033 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:33.991099 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:33.991019 2256281 retry.go:31] will retry after 579.085044ms: waiting for machine to come up
	I0911 12:07:34.571958 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:34.572419 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:34.572451 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:34.572371 2256281 retry.go:31] will retry after 584.464544ms: waiting for machine to come up
	I0911 12:07:35.158152 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.158644 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.158677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.158579 2256281 retry.go:31] will retry after 750.2868ms: waiting for machine to come up
	I0911 12:07:35.910364 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:35.910949 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:35.910983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:35.910887 2256281 retry.go:31] will retry after 981.989906ms: waiting for machine to come up
	I0911 12:07:36.894691 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:36.895196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:36.895233 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:36.895151 2256281 retry.go:31] will retry after 1.082443232s: waiting for machine to come up
	I0911 12:07:37.979265 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:37.979773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:37.979802 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:37.979699 2256281 retry.go:31] will retry after 1.429811083s: waiting for machine to come up
	I0911 12:07:38.272328 2255187 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.031081597s)
	I0911 12:07:38.272378 2255187 crio.go:451] Took 3.031222 seconds to extract the tarball
	I0911 12:07:38.272392 2255187 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:07:38.314797 2255187 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:38.363925 2255187 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:07:38.363956 2255187 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:07:38.364034 2255187 ssh_runner.go:195] Run: crio config
	I0911 12:07:38.433884 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:38.433915 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:38.433941 2255187 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:07:38.433969 2255187 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.96 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-235462 NodeName:embed-certs-235462 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.96"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.96 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:07:38.434156 2255187 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.96
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-235462"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.96
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.96"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:07:38.434250 2255187 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=embed-certs-235462 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.96
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:07:38.434339 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:07:38.447171 2255187 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:07:38.447273 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:07:38.459426 2255187 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:07:38.478081 2255187 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:07:38.495571 2255187 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2102 bytes)
	I0911 12:07:38.514602 2255187 ssh_runner.go:195] Run: grep 192.168.50.96	control-plane.minikube.internal$ /etc/hosts
	I0911 12:07:38.518616 2255187 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.96	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:38.531178 2255187 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462 for IP: 192.168.50.96
	I0911 12:07:38.531246 2255187 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:07:38.531410 2255187 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:07:38.531471 2255187 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:07:38.531565 2255187 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/client.key
	I0911 12:07:38.531650 2255187 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key.8e4e34e1
	I0911 12:07:38.531705 2255187 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key
	I0911 12:07:38.531860 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:07:38.531918 2255187 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:07:38.531933 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:07:38.531976 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:07:38.532020 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:07:38.532071 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:07:38.532140 2255187 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:38.532870 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:07:38.558426 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0911 12:07:38.582526 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:07:38.606798 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/embed-certs-235462/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:07:38.630691 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:07:38.655580 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:07:38.682355 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:07:38.707701 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:07:38.732346 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:07:38.757688 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:07:38.783458 2255187 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:07:38.808481 2255187 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:07:38.825822 2255187 ssh_runner.go:195] Run: openssl version
	I0911 12:07:38.831897 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:07:38.842170 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847385 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.847467 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:07:38.853456 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:07:38.864049 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:07:38.874236 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879391 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.879463 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:07:38.885352 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:07:38.895225 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:07:38.905599 2255187 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910660 2255187 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.910748 2255187 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:07:38.916920 2255187 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:07:38.927096 2255187 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:07:38.932313 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:07:38.939081 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:07:38.946028 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:07:38.952644 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:07:38.959391 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:07:38.965871 2255187 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0911 12:07:38.972698 2255187 kubeadm.go:404] StartCluster: {Name:embed-certs-235462 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:embed-certs-235462 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:07:38.972838 2255187 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:07:38.972906 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:39.006683 2255187 cri.go:89] found id: ""
	I0911 12:07:39.006780 2255187 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:07:39.017143 2255187 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:07:39.017173 2255187 kubeadm.go:636] restartCluster start
	I0911 12:07:39.017256 2255187 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:07:39.029483 2255187 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.031111 2255187 kubeconfig.go:92] found "embed-certs-235462" server: "https://192.168.50.96:8443"
	I0911 12:07:39.034708 2255187 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:07:39.046851 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.046919 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.058732 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.058756 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.058816 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.070011 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.570811 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:39.570945 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:39.583538 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.071137 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.071264 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.083997 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:40.570532 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:40.570646 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:40.583202 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.070241 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.070369 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.082992 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:41.570284 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:41.570420 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:41.582669 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.070231 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.070341 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.086964 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:42.570487 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:42.570592 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:42.582618 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:39.411715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:39.412168 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:39.412203 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:39.412129 2256281 retry.go:31] will retry after 2.048771803s: waiting for machine to come up
	I0911 12:07:41.463672 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:41.464124 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:41.464160 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:41.464061 2256281 retry.go:31] will retry after 2.459765131s: waiting for machine to come up
	I0911 12:07:43.071070 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.071249 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.087309 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.570993 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:43.571105 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:43.586884 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.070402 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.070525 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.082541 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:44.571170 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:44.571303 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:44.583295 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.070902 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.071002 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.087666 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:45.570274 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:45.570400 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:45.587352 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.070596 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.070729 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.082939 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:46.570445 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:46.570559 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:46.582782 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.070351 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.070485 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.082518 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:47.571060 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:47.571155 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:47.583891 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:43.926561 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:43.926941 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:43.926983 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:43.926918 2256281 retry.go:31] will retry after 2.467825155s: waiting for machine to come up
	I0911 12:07:46.396258 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:46.396703 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:46.396736 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:46.396622 2256281 retry.go:31] will retry after 3.885293775s: waiting for machine to come up
	I0911 12:07:48.070904 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.070994 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.083706 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:48.570268 2255187 api_server.go:166] Checking apiserver status ...
	I0911 12:07:48.570404 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:07:48.582255 2255187 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:07:49.047880 2255187 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:07:49.047929 2255187 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:07:49.047951 2255187 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:07:49.048052 2255187 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:07:49.081907 2255187 cri.go:89] found id: ""
	I0911 12:07:49.082024 2255187 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:07:49.099563 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:07:49.109373 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:07:49.109450 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119162 2255187 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:07:49.119210 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.251091 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:49.995928 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.192421 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.288496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:50.365849 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:07:50.365943 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.383262 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.901757 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.401967 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:51.901613 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:52.402067 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:50.285991 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:50.286515 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | unable to find current IP address of domain old-k8s-version-642215 in network mk-old-k8s-version-642215
	I0911 12:07:50.286547 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | I0911 12:07:50.286433 2256281 retry.go:31] will retry after 3.948880306s: waiting for machine to come up
	I0911 12:07:55.614569 2255814 start.go:369] acquired machines lock for "default-k8s-diff-port-484027" in 2m57.464444695s
	I0911 12:07:55.614642 2255814 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:07:55.614662 2255814 fix.go:54] fixHost starting: 
	I0911 12:07:55.615164 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:07:55.615208 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:07:55.635996 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38399
	I0911 12:07:55.636556 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:07:55.637268 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:07:55.637295 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:07:55.637758 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:07:55.638000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:07:55.638191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:07:55.640059 2255814 fix.go:102] recreateIfNeeded on default-k8s-diff-port-484027: state=Stopped err=<nil>
	I0911 12:07:55.640086 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	W0911 12:07:55.640254 2255814 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:07:55.643100 2255814 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-484027" ...
	I0911 12:07:54.236661 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237200 2255304 main.go:141] libmachine: (old-k8s-version-642215) Found IP for machine: 192.168.61.58
	I0911 12:07:54.237226 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserving static IP address...
	I0911 12:07:54.237241 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has current primary IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.237676 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.237717 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | skip adding static IP to network mk-old-k8s-version-642215 - found existing host DHCP lease matching {name: "old-k8s-version-642215", mac: "52:54:00:4e:60:8b", ip: "192.168.61.58"}
	I0911 12:07:54.237736 2255304 main.go:141] libmachine: (old-k8s-version-642215) Reserved static IP address: 192.168.61.58
	I0911 12:07:54.237756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Waiting for SSH to be available...
	I0911 12:07:54.237773 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Getting to WaitForSSH function...
	I0911 12:07:54.240007 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240469 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.240521 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.240610 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH client type: external
	I0911 12:07:54.240642 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa (-rw-------)
	I0911 12:07:54.240679 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.58 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:07:54.240700 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | About to run SSH command:
	I0911 12:07:54.240715 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | exit 0
	I0911 12:07:54.337416 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | SSH cmd err, output: <nil>: 
	I0911 12:07:54.337857 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetConfigRaw
	I0911 12:07:54.338666 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.341640 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.341973 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.342025 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.342296 2255304 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/config.json ...
	I0911 12:07:54.342549 2255304 machine.go:88] provisioning docker machine ...
	I0911 12:07:54.342573 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:54.342809 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.342965 2255304 buildroot.go:166] provisioning hostname "old-k8s-version-642215"
	I0911 12:07:54.342986 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.343133 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.345466 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.345848 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.345881 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.346024 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.346214 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.346491 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.346713 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.347165 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.347184 2255304 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-642215 && echo "old-k8s-version-642215" | sudo tee /etc/hostname
	I0911 12:07:54.487005 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-642215
	
	I0911 12:07:54.487058 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.489843 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490146 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.490175 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.490378 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.490603 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490774 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.490931 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.491146 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.491586 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.491612 2255304 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-642215' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-642215/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-642215' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:07:54.631441 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:07:54.631474 2255304 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:07:54.631500 2255304 buildroot.go:174] setting up certificates
	I0911 12:07:54.631513 2255304 provision.go:83] configureAuth start
	I0911 12:07:54.631525 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetMachineName
	I0911 12:07:54.631988 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:54.634992 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635411 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.635448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.635700 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.638219 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638608 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.638646 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.638788 2255304 provision.go:138] copyHostCerts
	I0911 12:07:54.638870 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:07:54.638881 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:07:54.638957 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:07:54.639087 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:07:54.639099 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:07:54.639128 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:07:54.639278 2255304 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:07:54.639293 2255304 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:07:54.639322 2255304 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:07:54.639405 2255304 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-642215 san=[192.168.61.58 192.168.61.58 localhost 127.0.0.1 minikube old-k8s-version-642215]
	I0911 12:07:54.792963 2255304 provision.go:172] copyRemoteCerts
	I0911 12:07:54.793027 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:07:54.793056 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.796196 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796555 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.796592 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.796884 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.797124 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.797410 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.797620 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:54.895690 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0911 12:07:54.923392 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:07:54.951276 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:07:54.979345 2255304 provision.go:86] duration metric: configureAuth took 347.814948ms
	I0911 12:07:54.979383 2255304 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:07:54.979690 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:07:54.979805 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:54.982955 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983405 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:54.983448 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:54.983618 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:54.983822 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984020 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:54.984190 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:54.984377 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:54.984924 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:54.984948 2255304 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:07:55.330958 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:07:55.330995 2255304 machine.go:91] provisioned docker machine in 988.429681ms
	I0911 12:07:55.331008 2255304 start.go:300] post-start starting for "old-k8s-version-642215" (driver="kvm2")
	I0911 12:07:55.331021 2255304 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:07:55.331049 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.331490 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:07:55.331536 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.334936 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335425 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.335467 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.335645 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.335902 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.336075 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.336290 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.439126 2255304 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:07:55.445330 2255304 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:07:55.445370 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:07:55.445453 2255304 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:07:55.445564 2255304 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:07:55.445692 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:07:55.455235 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:07:55.480979 2255304 start.go:303] post-start completed in 149.950869ms
	I0911 12:07:55.481014 2255304 fix.go:56] fixHost completed within 23.694753941s
	I0911 12:07:55.481046 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.484222 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484612 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.484647 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.484879 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.485159 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485352 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.485527 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.485696 2255304 main.go:141] libmachine: Using SSH client type: native
	I0911 12:07:55.486109 2255304 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.58 22 <nil> <nil>}
	I0911 12:07:55.486122 2255304 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:07:55.614312 2255304 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434075.554093051
	
	I0911 12:07:55.614344 2255304 fix.go:206] guest clock: 1694434075.554093051
	I0911 12:07:55.614355 2255304 fix.go:219] Guest: 2023-09-11 12:07:55.554093051 +0000 UTC Remote: 2023-09-11 12:07:55.481020512 +0000 UTC m=+302.412352865 (delta=73.072539ms)
	I0911 12:07:55.614409 2255304 fix.go:190] guest clock delta is within tolerance: 73.072539ms
	I0911 12:07:55.614423 2255304 start.go:83] releasing machines lock for "old-k8s-version-642215", held for 23.828210342s
	I0911 12:07:55.614465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.614816 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:55.617993 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618444 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.618489 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.618674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619275 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619473 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:07:55.619611 2255304 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:07:55.619674 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.619732 2255304 ssh_runner.go:195] Run: cat /version.json
	I0911 12:07:55.619767 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:07:55.622428 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622846 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.622873 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.622894 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623012 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623191 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623279 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:55.623302 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:55.623399 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623465 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:07:55.623543 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.623615 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:07:55.623747 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:07:55.623891 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:07:55.742462 2255304 ssh_runner.go:195] Run: systemctl --version
	I0911 12:07:55.748982 2255304 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:07:55.906639 2255304 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:07:55.914088 2255304 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:07:55.914183 2255304 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:07:55.938200 2255304 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:07:55.938240 2255304 start.go:466] detecting cgroup driver to use...
	I0911 12:07:55.938333 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:07:55.965549 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:07:55.986227 2255304 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:07:55.986308 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:07:56.003370 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:07:56.025702 2255304 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:07:56.158835 2255304 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:07:56.311687 2255304 docker.go:212] disabling docker service ...
	I0911 12:07:56.311770 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:07:56.337492 2255304 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:07:56.355858 2255304 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:07:56.486823 2255304 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:07:56.617414 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:07:56.634057 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:07:56.658242 2255304 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.1" pause image...
	I0911 12:07:56.658370 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.1"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.670146 2255304 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:07:56.670252 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.681790 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.695832 2255304 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:07:56.707434 2255304 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:07:56.718631 2255304 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:07:56.729355 2255304 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:07:56.729436 2255304 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:07:56.744591 2255304 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:07:56.755374 2255304 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:07:56.906693 2255304 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:07:57.131296 2255304 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:07:57.131439 2255304 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:07:57.137554 2255304 start.go:534] Will wait 60s for crictl version
	I0911 12:07:57.137645 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:07:57.141720 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:07:57.178003 2255304 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:07:57.178110 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.236871 2255304 ssh_runner.go:195] Run: crio --version
	I0911 12:07:57.303639 2255304 out.go:177] * Preparing Kubernetes v1.16.0 on CRI-O 1.24.1 ...
	I0911 12:07:52.901170 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.401940 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:07:53.430776 2255187 api_server.go:72] duration metric: took 3.064926262s to wait for apiserver process to appear ...
	I0911 12:07:53.430809 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:07:53.430837 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431478 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.431528 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:53.431982 2255187 api_server.go:269] stopped: https://192.168.50.96:8443/healthz: Get "https://192.168.50.96:8443/healthz": dial tcp 192.168.50.96:8443: connect: connection refused
	I0911 12:07:53.932765 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.216903 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.216947 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.216964 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.322957 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:07:56.322994 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:07:56.432419 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.444961 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.445016 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:56.932209 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:56.942202 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:07:56.942242 2255187 api_server.go:103] status: https://192.168.50.96:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:07:57.432361 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:07:57.440671 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:07:57.453348 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:07:57.453393 2255187 api_server.go:131] duration metric: took 4.0225758s to wait for apiserver health ...
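
The healthz polling above follows a simple pattern: keep hitting /healthz until it returns 200, treating the anonymous 403s and the 500s (post-start hooks still pending) as transient. A minimal, hedged sketch of that loop, not minikube's actual api_server.go code:

// Sketch only: poll the apiserver /healthz endpoint until it returns 200.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 2 * time.Second,
		// The poller is anonymous, so certificate verification is skipped here.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // body is just "ok"
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending) are retried.
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.50.96:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}
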
	I0911 12:07:57.453408 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:07:57.453418 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:07:57.455939 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:07:57.457968 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:07:57.488156 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
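
The 457-byte conflist copied to /etc/cni/net.d is not shown in the log. As an assumption-laden illustration, a bridge CNI configuration of roughly that shape could be written as below; the JSON content is a generic bridge+portmap example, not necessarily what minikube generates.

// Sketch only: write a minimal bridge CNI conflist to /etc/cni/net.d.
package main

import "os"

const conflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		panic(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(conflist), 0o644); err != nil {
		panic(err)
	}
}
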
	I0911 12:07:57.524742 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:07:57.543532 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:07:57.543601 2255187 system_pods.go:61] "coredns-5dd5756b68-pkzcf" [4a44c7ec-bb5b-40f0-8d44-d5b77666cb95] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:07:57.543616 2255187 system_pods.go:61] "etcd-embed-certs-235462" [c14f9910-0d1d-4494-9ebe-97173ab9abe9] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:07:57.543671 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4d95f49f-f9ad-40ce-9101-7e67ad978353] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:07:57.543686 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [753eea69-23f4-46f8-b631-36cf0f34d663] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:07:57.543701 2255187 system_pods.go:61] "kube-proxy-v24dz" [e527b198-cf8f-4ada-af22-7979b249efd5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:07:57.543711 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [b092d336-c45d-4b2c-87a5-df253a5fddbd] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:07:57.543722 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-ldjwn" [4761a51f-8912-4be4-aa1d-2574e10da791] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:07:57.543735 2255187 system_pods.go:61] "storage-provisioner" [810336ff-14a1-4b3d-a4ff-2569f3710bab] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:07:57.543744 2255187 system_pods.go:74] duration metric: took 18.975758ms to wait for pod list to return data ...
	I0911 12:07:57.543770 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:07:57.550468 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:07:57.550512 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:07:57.550527 2255187 node_conditions.go:105] duration metric: took 6.741621ms to run NodePressure ...
	I0911 12:07:57.550552 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:07:55.644857 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Start
	I0911 12:07:55.645094 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring networks are active...
	I0911 12:07:55.646010 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network default is active
	I0911 12:07:55.646393 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Ensuring network mk-default-k8s-diff-port-484027 is active
	I0911 12:07:55.646808 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Getting domain xml...
	I0911 12:07:55.647513 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Creating domain...
	I0911 12:07:57.083879 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting to get IP...
	I0911 12:07:57.084769 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085290 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.085361 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.085279 2256448 retry.go:31] will retry after 226.596764ms: waiting for machine to come up
	I0911 12:07:57.313593 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314083 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.314106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.314029 2256448 retry.go:31] will retry after 315.605673ms: waiting for machine to come up
	I0911 12:07:57.631774 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632292 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.632329 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:57.632179 2256448 retry.go:31] will retry after 400.211275ms: waiting for machine to come up
	I0911 12:07:58.034189 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:57.305610 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetIP
	I0911 12:07:57.309276 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.309677 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:07:57.309721 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:07:57.310066 2255304 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0911 12:07:57.316611 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:07:57.335580 2255304 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0911 12:07:57.335689 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:07:57.380592 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:07:57.380690 2255304 ssh_runner.go:195] Run: which lz4
	I0911 12:07:57.386023 2255304 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 12:07:57.391807 2255304 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1

	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:07:57.391861 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (441050307 bytes)
	I0911 12:07:58.002314 2255187 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010948 2255187 kubeadm.go:787] kubelet initialised
	I0911 12:07:58.010981 2255187 kubeadm.go:788] duration metric: took 8.627903ms waiting for restarted kubelet to initialise ...
	I0911 12:07:58.010993 2255187 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:07:58.020253 2255187 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.027844 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027876 2255187 pod_ready.go:81] duration metric: took 7.583678ms waiting for pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.027888 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "coredns-5dd5756b68-pkzcf" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.027900 2255187 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.050283 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050321 2255187 pod_ready.go:81] duration metric: took 22.413628ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.050352 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "etcd-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.050369 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.060314 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060348 2255187 pod_ready.go:81] duration metric: took 9.962502ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.060360 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.060371 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:58.069122 2255187 pod_ready.go:97] node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069152 2255187 pod_ready.go:81] duration metric: took 8.771982ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	E0911 12:07:58.069164 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-235462" hosting pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-235462" has status "Ready":"False"
	I0911 12:07:58.069176 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329758 2255187 pod_ready.go:92] pod "kube-proxy-v24dz" in "kube-system" namespace has status "Ready":"True"
	I0911 12:07:59.329789 2255187 pod_ready.go:81] duration metric: took 1.260592229s waiting for pod "kube-proxy-v24dz" in "kube-system" namespace to be "Ready" ...
	I0911 12:07:59.329804 2255187 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:01.526483 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
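
The pod_ready checks above poll each system pod's Ready condition, skipping pods whose node is not yet Ready. A rough client-go equivalent for a single pod is sketched below; the kubeconfig path is a placeholder and the pod name is taken from this run.

// Sketch only, assuming client-go is available: wait for a pod to be Ready.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // hypothetical kubeconfig path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(), "kube-proxy-v24dz", metav1.GetOptions{})
		if err == nil && podReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(2 * time.Second)
	}
	fmt.Println("timed out waiting for pod")
}
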
	I0911 12:07:58.034838 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.037141 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.034724 2256448 retry.go:31] will retry after 394.484585ms: waiting for machine to come up
	I0911 12:07:58.431365 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.431982 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:58.432004 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:58.431886 2256448 retry.go:31] will retry after 593.506569ms: waiting for machine to come up
	I0911 12:07:59.026841 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027490 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.027518 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.027389 2256448 retry.go:31] will retry after 666.166785ms: waiting for machine to come up
	I0911 12:07:59.694652 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695161 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:07:59.695191 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:07:59.695113 2256448 retry.go:31] will retry after 975.320046ms: waiting for machine to come up
	I0911 12:08:00.672258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672804 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:00.672851 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:00.672755 2256448 retry.go:31] will retry after 1.161656415s: waiting for machine to come up
	I0911 12:08:01.835653 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836186 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:01.836223 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:01.836130 2256448 retry.go:31] will retry after 1.505608393s: waiting for machine to come up
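
The libmachine lines above show the usual wait-for-IP loop: probe the domain's DHCP lease and, on failure, retry after a roughly increasing, jittered delay. A generic sketch of that retry-with-backoff pattern (not libmachine's code):

// Sketch only: retry a probe with a growing, jittered backoff.
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry keeps calling fn until it succeeds or attempts are exhausted,
// sleeping a little longer (with jitter) between attempts.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		sleep := base*time.Duration(i+1) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
	}
	return err
}

func main() {
	ipKnown := false
	_ = retry(10, 300*time.Millisecond, func() error {
		if !ipKnown {
			ipKnown = true // pretend the DHCP lease shows up on the second attempt
			return errors.New("waiting for machine to come up")
		}
		return nil
	})
}
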
	I0911 12:07:59.503695 2255304 crio.go:444] Took 2.117718 seconds to copy over tarball
	I0911 12:07:59.503800 2255304 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:08:02.939001 2255304 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.435164165s)
	I0911 12:08:02.939037 2255304 crio.go:451] Took 3.435307 seconds to extract the tarball
	I0911 12:08:02.939050 2255304 ssh_runner.go:146] rm: /preloaded.tar.lz4
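
The preload handling above amounts to: check whether /preloaded.tar.lz4 already exists on the guest, copy it over if not, extract it with lz4 under /var, then delete the tarball. A simplified local sketch using os/exec rather than SSH:

// Sketch only: extract the preload tarball and clean it up afterwards.
package main

import (
	"fmt"
	"os"
	"os/exec"
)

func extractPreload(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		// In the log this is where the tarball gets copied from the host cache.
		return fmt.Errorf("preload tarball missing: %w", err)
	}
	// Same invocation as the log: decompress with lz4 and unpack under /var.
	cmd := exec.Command("sudo", "tar", "-I", "lz4", "-C", "/var", "-xf", tarball)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	if err := cmd.Run(); err != nil {
		return fmt.Errorf("extracting %s: %w", tarball, err)
	}
	// The tarball is deleted afterwards to free disk space on the guest.
	return os.Remove(tarball)
}

func main() {
	if err := extractPreload("/preloaded.tar.lz4"); err != nil {
		fmt.Println(err)
	}
}
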
	I0911 12:08:02.984446 2255304 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:03.037419 2255304 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.16.0". assuming images are not preloaded.
	I0911 12:08:03.037452 2255304 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.16.0 registry.k8s.io/kube-controller-manager:v1.16.0 registry.k8s.io/kube-scheduler:v1.16.0 registry.k8s.io/kube-proxy:v1.16.0 registry.k8s.io/pause:3.1 registry.k8s.io/etcd:3.3.15-0 registry.k8s.io/coredns:1.6.2 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:03.037546 2255304 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.037582 2255304 image.go:134] retrieving image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.037597 2255304 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.037628 2255304 image.go:134] retrieving image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.037583 2255304 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.037607 2255304 image.go:134] retrieving image: registry.k8s.io/pause:3.1
	I0911 12:08:03.037551 2255304 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.037549 2255304 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.039413 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.039639 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.039650 2255304 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.039819 2255304 image.go:177] daemon lookup for registry.k8s.io/coredns:1.6.2: Error response from daemon: No such image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.039854 2255304 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.16.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.040031 2255304 image.go:177] daemon lookup for registry.k8s.io/etcd:3.3.15-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.040241 2255304 image.go:177] daemon lookup for registry.k8s.io/pause:3.1: Error response from daemon: No such image: registry.k8s.io/pause:3.1
	I0911 12:08:03.815561 2255187 pod_ready.go:102] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:04.614171 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:04.614199 2255187 pod_ready.go:81] duration metric: took 5.28438743s waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:04.614211 2255187 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:06.638688 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:03.343936 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353931 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:03.353970 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:03.344315 2256448 retry.go:31] will retry after 1.414606279s: waiting for machine to come up
	I0911 12:08:04.761183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761667 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:04.761695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:04.761607 2256448 retry.go:31] will retry after 1.846261641s: waiting for machine to come up
	I0911 12:08:06.609258 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609917 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:06.609965 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:06.609851 2256448 retry.go:31] will retry after 2.938814697s: waiting for machine to come up
	I0911 12:08:03.225129 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.227566 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.231565 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.233817 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.239841 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.1
	I0911 12:08:03.243250 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.247155 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.522779 2255304 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:03.711354 2255304 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.16.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.16.0" does not exist at hash "b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e" in container runtime
	I0911 12:08:03.711381 2255304 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.16.0" needs transfer: "registry.k8s.io/kube-proxy:v1.16.0" does not exist at hash "c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384" in container runtime
	I0911 12:08:03.711438 2255304 cache_images.go:116] "registry.k8s.io/coredns:1.6.2" needs transfer: "registry.k8s.io/coredns:1.6.2" does not exist at hash "bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b" in container runtime
	I0911 12:08:03.711473 2255304 cri.go:218] Removing image: registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.711501 2255304 cache_images.go:116] "registry.k8s.io/pause:3.1" needs transfer: "registry.k8s.io/pause:3.1" does not exist at hash "da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e" in container runtime
	I0911 12:08:03.711514 2255304 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.16.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.16.0" does not exist at hash "06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d" in container runtime
	I0911 12:08:03.711530 2255304 cri.go:218] Removing image: registry.k8s.io/pause:3.1
	I0911 12:08:03.711602 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711641 2255304 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.16.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.16.0" does not exist at hash "301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a" in container runtime
	I0911 12:08:03.711678 2255304 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.711735 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711536 2255304 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.711823 2255304 cache_images.go:116] "registry.k8s.io/etcd:3.3.15-0" needs transfer: "registry.k8s.io/etcd:3.3.15-0" does not exist at hash "b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed" in container runtime
	I0911 12:08:03.711854 2255304 cri.go:218] Removing image: registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.711856 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711894 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711475 2255304 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.711934 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711541 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.711474 2255304 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.712005 2255304 ssh_runner.go:195] Run: which crictl
	I0911 12:08:03.823116 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.1
	I0911 12:08:03.823136 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.16.0
	I0911 12:08:03.823232 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.16.0
	I0911 12:08:03.823349 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.16.0
	I0911 12:08:03.823374 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.3.15-0
	I0911 12:08:03.823429 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.6.2
	I0911 12:08:03.823499 2255304 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.16.0
	I0911 12:08:03.957383 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1
	I0911 12:08:03.957459 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0
	I0911 12:08:03.957513 2255304 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.957521 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.3.15-0
	I0911 12:08:03.957564 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.16.0
	I0911 12:08:03.957649 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.6.2
	I0911 12:08:03.957707 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.16.0
	I0911 12:08:03.957743 2255304 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.16.0
	I0911 12:08:03.962841 2255304 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/pause_3.1 (exists)
	I0911 12:08:03.962863 2255304 crio.go:257] Loading image: /var/lib/minikube/images/pause_3.1
	I0911 12:08:03.962905 2255304 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/pause_3.1
	I0911 12:08:05.018464 2255304 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/pause_3.1: (1.055478429s)
	I0911 12:08:05.018510 2255304 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/pause_3.1 from cache
	I0911 12:08:05.018571 2255304 cache_images.go:92] LoadImages completed in 1.981102195s
	W0911 12:08:05.018661 2255304 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.16.0: no such file or directory
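
The image-cache path above works by inspecting each required image in the container runtime and, for anything missing, staging the cached archive and loading it with podman; in this run only the pause image was available in the cache, hence the warning. A hedged sketch of that decision loop, with paths simplified:

// Sketch only: load missing images from a local cache with "podman load".
package main

import (
	"fmt"
	"os/exec"
	"path/filepath"
	"strings"
)

func imagePresent(image string) bool {
	// Non-zero exit means the runtime has no such image.
	return exec.Command("sudo", "podman", "image", "inspect", "--format", "{{.Id}}", image).Run() == nil
}

func loadFromCache(image, cacheDir string) error {
	// e.g. registry.k8s.io/pause:3.1 -> <cacheDir>/pause_3.1
	name := strings.ReplaceAll(filepath.Base(image), ":", "_")
	archive := filepath.Join(cacheDir, name)
	out, err := exec.Command("sudo", "podman", "load", "-i", archive).CombinedOutput()
	if err != nil {
		return fmt.Errorf("podman load %s: %v: %s", archive, err, out)
	}
	return nil
}

func main() {
	cacheDir := "/var/lib/minikube/images" // the staging directory used in the log
	for _, img := range []string{"registry.k8s.io/pause:3.1", "registry.k8s.io/etcd:3.3.15-0"} {
		if imagePresent(img) {
			continue
		}
		if err := loadFromCache(img, cacheDir); err != nil {
			fmt.Println(err)
		}
	}
}
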
	I0911 12:08:05.018747 2255304 ssh_runner.go:195] Run: crio config
	I0911 12:08:05.107550 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:05.107585 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:05.107614 2255304 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:05.107641 2255304 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.58 APIServerPort:8443 KubernetesVersion:v1.16.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-642215 NodeName:old-k8s-version-642215 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.58"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.58 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0911 12:08:05.107908 2255304 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.58
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-642215"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.58
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta1
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.58"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: old-k8s-version-642215
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      listen-metrics-urls: http://127.0.0.1:2381,http://192.168.61.58:2381
	kubernetesVersion: v1.16.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:08:05.108027 2255304 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.16.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-642215 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.61.58
	
	[Install]
	 config:
	{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:08:05.108106 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.16.0
	I0911 12:08:05.120210 2255304 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:05.120311 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:05.129517 2255304 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (429 bytes)
	I0911 12:08:05.151855 2255304 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:05.169543 2255304 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2177 bytes)
	I0911 12:08:05.190304 2255304 ssh_runner.go:195] Run: grep 192.168.61.58	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:05.196014 2255304 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.58	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
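
Both host.minikube.internal and control-plane.minikube.internal are pinned in /etc/hosts with the same idempotent grep-and-append trick shown above. The same update expressed in Go, as an illustration only:

// Sketch only: replace or append a single hostname mapping in /etc/hosts.
package main

import (
	"fmt"
	"os"
	"strings"
)

func setHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // any existing entry for this hostname is replaced below
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
}

func main() {
	if err := setHostsEntry("/etc/hosts", "192.168.61.58", "control-plane.minikube.internal"); err != nil {
		fmt.Println(err)
	}
}
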
	I0911 12:08:05.211627 2255304 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215 for IP: 192.168.61.58
	I0911 12:08:05.211663 2255304 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:05.211876 2255304 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:05.211943 2255304 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:05.212043 2255304 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.key
	I0911 12:08:05.212130 2255304 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key.7152e027
	I0911 12:08:05.212217 2255304 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key
	I0911 12:08:05.212397 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:05.212451 2255304 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:05.212467 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:05.212500 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:05.212531 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:05.212568 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:05.212637 2255304 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:05.213373 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:05.242362 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:05.272949 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:05.299359 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:05.326203 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:05.354388 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:05.385150 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:05.415683 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:05.449119 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:05.476397 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:05.503652 2255304 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:05.531520 2255304 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:05.550108 2255304 ssh_runner.go:195] Run: openssl version
	I0911 12:08:05.556982 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:05.569083 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574490 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.574570 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:05.581479 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:05.596824 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:05.607900 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613627 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.613711 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:05.620309 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:05.630995 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:05.645786 2255304 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652682 2255304 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.652773 2255304 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:05.660784 2255304 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:05.675417 2255304 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:05.681969 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:05.690345 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:05.697454 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:05.706283 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:05.712913 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:05.719308 2255304 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
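
The openssl "-checkend 86400" invocations above simply verify that each certificate remains valid for at least the next 24 hours. The equivalent check in Go's standard library looks roughly like this:

// Sketch only: parse a PEM certificate and confirm it is valid 24h from now.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func certValidFor(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM data", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).Before(cert.NotAfter), nil
}

func main() {
	ok, err := certValidFor("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("valid for the next 24h:", ok)
}
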
	I0911 12:08:05.726307 2255304 kubeadm.go:404] StartCluster: {Name:old-k8s-version-642215 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:old-k8s-version-642215 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:05.726414 2255304 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:05.726478 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:05.765092 2255304 cri.go:89] found id: ""
	I0911 12:08:05.765172 2255304 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:05.775654 2255304 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:05.775681 2255304 kubeadm.go:636] restartCluster start
	I0911 12:08:05.775749 2255304 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:05.785235 2255304 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.786289 2255304 kubeconfig.go:92] found "old-k8s-version-642215" server: "https://192.168.61.58:8443"
	I0911 12:08:05.789768 2255304 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:05.799009 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.799092 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.811208 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:05.811235 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:05.811301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:05.822223 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.322909 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.323053 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.337866 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:06.823220 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:06.823328 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:06.839573 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.323145 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.323245 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.335054 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:07.822427 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:07.822536 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:07.834385 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
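
The repeated "Checking apiserver status" entries above are a poll: minikube runs "sudo pgrep -xnf kube-apiserver.*minikube.*" on the guest roughly twice a second and treats a non-zero exit as "apiserver not up yet". Below is a minimal sketch of that loop, assuming a hypothetical runSSH helper in place of minikube's ssh_runner; the host address is taken from the log.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// runSSH stands in for minikube's ssh_runner: run one command on the guest over
// ssh and return its trimmed combined output.
func runSSH(addr, cmd string) (string, error) {
	out, err := exec.Command("ssh", "-o", "StrictHostKeyChecking=no", addr, cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// waitForAPIServerPID polls pgrep on the guest until kube-apiserver has a pid or
// the deadline passes, mirroring the roughly half-second cadence in the log above.
func waitForAPIServerPID(addr string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pid, err := runSSH(addr, "sudo pgrep -xnf kube-apiserver.*minikube.*")
		if err == nil && pid != "" {
			return pid, nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return "", fmt.Errorf("kube-apiserver did not appear within %s", timeout)
}

func main() {
	pid, err := waitForAPIServerPID("docker@192.168.61.58", 30*time.Second)
	fmt.Println(pid, err)
}
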
	I0911 12:08:09.146768 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:11.637314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:09.552075 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552494 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:09.552520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:09.552442 2256448 retry.go:31] will retry after 3.623277093s: waiting for machine to come up
	I0911 12:08:08.323215 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.323301 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.335501 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:08.822942 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:08.823061 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:08.840055 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.322586 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.322692 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.338101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:09.822702 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:09.822845 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:09.835245 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.322666 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.322750 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.337101 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:10.822530 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:10.822662 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:10.838511 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.323206 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.323329 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.338239 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:11.822952 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:11.823044 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:11.838752 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.323296 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.323384 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.335174 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:12.822659 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:12.822775 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:12.834762 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.637784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:16.138584 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:13.178553 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179008 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | unable to find current IP address of domain default-k8s-diff-port-484027 in network mk-default-k8s-diff-port-484027
	I0911 12:08:13.179041 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | I0911 12:08:13.178961 2256448 retry.go:31] will retry after 3.636806595s: waiting for machine to come up
	I0911 12:08:16.818087 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818548 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has current primary IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.818583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Found IP for machine: 192.168.39.230
	I0911 12:08:16.818600 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserving static IP address...
	I0911 12:08:16.819118 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.819156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Reserved static IP address: 192.168.39.230
	I0911 12:08:16.819182 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | skip adding static IP to network mk-default-k8s-diff-port-484027 - found existing host DHCP lease matching {name: "default-k8s-diff-port-484027", mac: "52:54:00:b1:16:75", ip: "192.168.39.230"}
	I0911 12:08:16.819204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Getting to WaitForSSH function...
	I0911 12:08:16.819221 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Waiting for SSH to be available...
	I0911 12:08:16.821746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822235 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.822270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.822454 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH client type: external
	I0911 12:08:16.822500 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa (-rw-------)
	I0911 12:08:16.822551 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.230 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:16.822576 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | About to run SSH command:
	I0911 12:08:16.822590 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | exit 0
	I0911 12:08:16.957464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | SSH cmd err, output: <nil>: 
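
The WaitForSSH step above shells out to the system ssh binary with host-key checking disabled and the machine's generated id_rsa, and simply runs "exit 0" until it succeeds. A rough sketch of that probe follows; the option list mirrors the log, sshAlive itself is a hypothetical helper, and the caller would retry it until it returns nil.

package main

import (
	"fmt"
	"os/exec"
)

// sshAlive runs "exit 0" on the guest with the same kind of option set the log
// shows. A nil return means the VM is reachable over SSH.
func sshAlive(addr, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "ControlMaster=no",
		"-o", "ControlPath=none",
		"-o", "LogLevel=quiet",
		"-o", "PasswordAuthentication=no",
		"-o", "ServerAliveInterval=60",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		addr, "exit 0",
	}
	if out, err := exec.Command("ssh", args...).CombinedOutput(); err != nil {
		return fmt.Errorf("ssh not ready: %v: %s", err, out)
	}
	return nil
}

func main() {
	fmt.Println(sshAlive("docker@192.168.39.230",
		"/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa"))
}
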
	I0911 12:08:16.957845 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetConfigRaw
	I0911 12:08:16.958573 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:16.961262 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.961726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.961762 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.962073 2255814 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/config.json ...
	I0911 12:08:16.962281 2255814 machine.go:88] provisioning docker machine ...
	I0911 12:08:16.962301 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:16.962594 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962777 2255814 buildroot.go:166] provisioning hostname "default-k8s-diff-port-484027"
	I0911 12:08:16.962799 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:16.962971 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:16.965571 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966095 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:16.966134 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:16.966313 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:16.966531 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966685 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:16.966837 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:16.967021 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:16.967739 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:16.967764 2255814 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-484027 && echo "default-k8s-diff-port-484027" | sudo tee /etc/hostname
	I0911 12:08:17.106967 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-484027
	
	I0911 12:08:17.107036 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.110243 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110663 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.110737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.110953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.111197 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111388 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.111526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.111782 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.112200 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.112223 2255814 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-484027' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-484027/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-484027' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:17.238410 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
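
The two commands above set the guest's hostname and then make sure /etc/hosts maps 127.0.1.1 to it, so the node can resolve its own name. The sketch below rebuilds those shell snippets from Go; hostnameCommands is a hypothetical helper and the quoting is an approximation of what the log shows, not minikube's source.

package main

import "fmt"

// hostnameCommands returns the two provisioning commands: set the hostname, then
// ensure an /etc/hosts entry for 127.0.1.1 points at it.
func hostnameCommands(name string) []string {
	set := fmt.Sprintf(`sudo hostname %s && echo "%s" | sudo tee /etc/hostname`, name, name)
	hosts := fmt.Sprintf(`if ! grep -xq '.*\s%s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts
  else
    echo '127.0.1.1 %s' | sudo tee -a /etc/hosts
  fi
fi`, name, name, name)
	return []string{set, hosts}
}

func main() {
	for _, c := range hostnameCommands("default-k8s-diff-port-484027") {
		fmt.Println(c)
	}
}
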
	I0911 12:08:17.238450 2255814 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:17.238508 2255814 buildroot.go:174] setting up certificates
	I0911 12:08:17.238520 2255814 provision.go:83] configureAuth start
	I0911 12:08:17.238536 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetMachineName
	I0911 12:08:17.238938 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:17.241635 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242044 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.242106 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.242209 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.244737 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245093 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.245117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.245295 2255814 provision.go:138] copyHostCerts
	I0911 12:08:17.245360 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:17.245375 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:17.245434 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:17.245530 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:17.245537 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:17.245557 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:17.245627 2255814 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:17.245634 2255814 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:17.245651 2255814 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:17.245708 2255814 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-484027 san=[192.168.39.230 192.168.39.230 localhost 127.0.0.1 minikube default-k8s-diff-port-484027]
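
The "generating server cert" line above produces a machine certificate signed by the local minikube CA, with the IPs and DNS names listed as SANs. Below is a hedged sketch of that step using crypto/x509; the organization string and validity period are assumptions, and the throwaway CA in main exists only so the example runs end to end (the real flow loads ca.pem and ca-key.pem from the .minikube directory).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert creates a server certificate for the machine, signed by the CA,
// carrying the SAN list shown in the log line above.
func newServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) ([]byte, []byte, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-484027"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour), // validity is an assumption
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("192.168.39.230"), net.ParseIP("127.0.0.1")},
		DNSNames:     []string{"localhost", "minikube", "default-k8s-diff-port-484027"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	// Throwaway CA so the example is self-contained; errors ignored for brevity.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)
	certPEM, _, err := newServerCert(caCert, caKey)
	fmt.Println(len(certPEM), err)
}
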
	I0911 12:08:17.540142 2255814 provision.go:172] copyRemoteCerts
	I0911 12:08:17.540233 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:17.540270 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.543823 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544237 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.544277 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.544485 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.544706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.544916 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.545060 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:17.645425 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:17.675288 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1257 bytes)
	I0911 12:08:17.703043 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:17.732683 2255814 provision.go:86] duration metric: configureAuth took 494.12506ms
	I0911 12:08:17.732713 2255814 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:17.732955 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:17.733076 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:17.736740 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737204 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:17.737244 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:17.737464 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:17.737707 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.737914 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:17.738084 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:17.738324 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:17.738749 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:17.738774 2255814 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:13.323070 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.323174 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.334828 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:13.822403 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:13.822490 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:13.834374 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.323004 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.323100 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.334774 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:14.822351 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:14.822465 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:14.834368 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:15.323045 2255304 api_server.go:166] Checking apiserver status ...
	I0911 12:08:15.323154 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:15.334863 2255304 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:15.799700 2255304 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:15.799736 2255304 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:15.799751 2255304 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:15.799821 2255304 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:15.831051 2255304 cri.go:89] found id: ""
	I0911 12:08:15.831140 2255304 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:15.847072 2255304 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:15.856353 2255304 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:15.856425 2255304 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865711 2255304 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:15.865740 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:15.990047 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.312314 2255304 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.322225408s)
	I0911 12:08:17.312354 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.521733 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.627343 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:17.723857 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:17.723964 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:17.742688 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
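
Because the expected configuration files were missing, restartCluster falls back to re-running individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the kubeadm.yaml already staged on the guest, then waits for the apiserver process, as the lines above show. Below is a sketch of that phase sequence driven over ssh; runSSH and rerunInitPhases are hypothetical stand-ins for minikube's ssh_runner, not its source.

package main

import (
	"fmt"
	"os/exec"
)

// runSSH runs one command on the guest over ssh.
func runSSH(addr, cmd string) error {
	return exec.Command("ssh", "-o", "StrictHostKeyChecking=no", addr, cmd).Run()
}

// rerunInitPhases invokes the version-pinned kubeadm binary for each init phase,
// in the order visible in the log above.
func rerunInitPhases(addr, version string) error {
	phases := []string{"certs all", "kubeconfig all", "kubelet-start", "control-plane all", "etcd local"}
	for _, p := range phases {
		cmd := fmt.Sprintf(`sudo env PATH="/var/lib/minikube/binaries/%s:$PATH" kubeadm init phase %s --config /var/tmp/minikube/kubeadm.yaml`, version, p)
		if err := runSSH(addr, cmd); err != nil {
			return fmt.Errorf("phase %q failed: %w", p, err)
		}
	}
	return nil
}

func main() {
	fmt.Println(rerunInitPhases("docker@192.168.61.58", "v1.16.0"))
}
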
	I0911 12:08:18.336038 2255048 start.go:369] acquired machines lock for "no-preload-352076" in 1m2.388468349s
	I0911 12:08:18.336100 2255048 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:08:18.336125 2255048 fix.go:54] fixHost starting: 
	I0911 12:08:18.336615 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:18.336663 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:18.355715 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37605
	I0911 12:08:18.356243 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:18.356901 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:08:18.356931 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:18.357385 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:18.357585 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:18.357787 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:08:18.359541 2255048 fix.go:102] recreateIfNeeded on no-preload-352076: state=Stopped err=<nil>
	I0911 12:08:18.359571 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	W0911 12:08:18.359750 2255048 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:08:18.361628 2255048 out.go:177] * Restarting existing kvm2 VM for "no-preload-352076" ...
	I0911 12:08:18.363286 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Start
	I0911 12:08:18.363532 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring networks are active...
	I0911 12:08:18.364515 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network default is active
	I0911 12:08:18.364894 2255048 main.go:141] libmachine: (no-preload-352076) Ensuring network mk-no-preload-352076 is active
	I0911 12:08:18.365345 2255048 main.go:141] libmachine: (no-preload-352076) Getting domain xml...
	I0911 12:08:18.366191 2255048 main.go:141] libmachine: (no-preload-352076) Creating domain...
	I0911 12:08:18.078952 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:18.078979 2255814 machine.go:91] provisioned docker machine in 1.116684764s
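
The provisioning command echoed just above (the one logged earlier with "printf %!s(MISSING)") writes CRIO_MINIKUBE_OPTIONS into /etc/sysconfig/crio.minikube and restarts crio. The %!s(MISSING) marker most likely appears because the command string contains a literal %s and was later passed through a printf-style logger with no matching argument. A sketch of how such a command could be assembled; crioMinikubeOptsCmd is a hypothetical helper.

package main

import "fmt"

// crioMinikubeOptsCmd builds the remote command that writes the CRI-O options
// file and restarts the service; the literal %s survives into the executed
// command so that printf on the guest receives the content as its argument.
func crioMinikubeOptsCmd(opts string) string {
	content := fmt.Sprintf("\nCRIO_MINIKUBE_OPTIONS='%s'\n", opts)
	return fmt.Sprintf(`sudo mkdir -p /etc/sysconfig && printf %%s "%s" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio`, content)
}

func main() {
	fmt.Println(crioMinikubeOptsCmd("--insecure-registry 10.96.0.0/12 "))
}
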
	I0911 12:08:18.078991 2255814 start.go:300] post-start starting for "default-k8s-diff-port-484027" (driver="kvm2")
	I0911 12:08:18.079011 2255814 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:18.079057 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.079482 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:18.079520 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.082212 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082641 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.082674 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.082810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.083043 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.083227 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.083403 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.170810 2255814 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:18.175342 2255814 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:18.175370 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:18.175457 2255814 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:18.175583 2255814 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:18.175722 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:18.184543 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:18.209487 2255814 start.go:303] post-start completed in 130.475291ms
	I0911 12:08:18.209516 2255814 fix.go:56] fixHost completed within 22.594854569s
	I0911 12:08:18.209540 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.212339 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212779 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.212832 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.212967 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.213187 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213366 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.213515 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.213680 2255814 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:18.214071 2255814 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.39.230 22 <nil> <nil>}
	I0911 12:08:18.214083 2255814 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:08:18.335862 2255814 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434098.277311369
	
	I0911 12:08:18.335893 2255814 fix.go:206] guest clock: 1694434098.277311369
	I0911 12:08:18.335902 2255814 fix.go:219] Guest: 2023-09-11 12:08:18.277311369 +0000 UTC Remote: 2023-09-11 12:08:18.20951981 +0000 UTC m=+200.212950109 (delta=67.791559ms)
	I0911 12:08:18.335925 2255814 fix.go:190] guest clock delta is within tolerance: 67.791559ms
	I0911 12:08:18.335932 2255814 start.go:83] releasing machines lock for "default-k8s-diff-port-484027", held for 22.721324127s
	I0911 12:08:18.335977 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.336342 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:18.339935 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340372 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.340411 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.340801 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341526 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341746 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:18.341832 2255814 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:18.341895 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.342153 2255814 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:18.342219 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:18.345331 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345619 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.345716 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.345751 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346068 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346282 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.346367 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:18.346409 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:18.346443 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.346624 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.346803 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:18.346960 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:18.347119 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:18.347284 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:18.455877 2255814 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:18.463787 2255814 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:18.620444 2255814 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:18.628878 2255814 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:18.628972 2255814 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:18.652267 2255814 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:18.652301 2255814 start.go:466] detecting cgroup driver to use...
	I0911 12:08:18.652381 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:18.672306 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:18.690514 2255814 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:18.690594 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:18.709032 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:18.727521 2255814 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:18.859864 2255814 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:19.005708 2255814 docker.go:212] disabling docker service ...
	I0911 12:08:19.005809 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:19.026177 2255814 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:19.043931 2255814 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:19.184060 2255814 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:19.305184 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:19.326550 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:19.351313 2255814 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:19.351400 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.366747 2255814 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:19.366836 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.382272 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.395743 2255814 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:19.408786 2255814 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:19.424229 2255814 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:19.438367 2255814 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:19.438450 2255814 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:19.457417 2255814 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:08:19.470001 2255814 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:19.629977 2255814 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:19.846900 2255814 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:19.846994 2255814 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:19.854282 2255814 start.go:534] Will wait 60s for crictl version
	I0911 12:08:19.854378 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:08:19.859252 2255814 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:19.897263 2255814 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:19.897349 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:19.966155 2255814 ssh_runner.go:195] Run: crio --version
	I0911 12:08:20.024697 2255814 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
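
The block above reconfigures the runtime before Kubernetes is prepared: point crictl at crio.sock, pin the pause image, switch CRI-O to the cgroupfs cgroup manager, drop and re-add conmon_cgroup, load br_netfilter, enable ip_forward, and restart crio. The sketch below lists those remote commands in the order the log shows; crioSetupCommands is a descriptive, hypothetical helper rather than minikube's source.

package main

import "fmt"

// crioSetupCommands returns the remote commands, in log order, used to point
// crictl at CRI-O and adjust /etc/crio/crio.conf.d/02-crio.conf before restart.
func crioSetupCommands(pauseImage string) []string {
	return []string{
		`sudo mkdir -p /etc && printf "runtime-endpoint: unix:///var/run/crio/crio.sock\n" | sudo tee /etc/crictl.yaml`,
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo rm -rf /etc/cni/net.mk`,
		`sudo modprobe br_netfilter`,
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
}

func main() {
	for _, c := range crioSetupCommands("registry.k8s.io/pause:3.9") {
		fmt.Println(c)
	}
}
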
	I0911 12:08:18.639188 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.649395 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:20.026156 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetIP
	I0911 12:08:20.029726 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030249 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:20.030286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:20.030572 2255814 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:20.035523 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:20.053903 2255814 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:20.053997 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:20.096570 2255814 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:20.096666 2255814 ssh_runner.go:195] Run: which lz4
	I0911 12:08:20.102350 2255814 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:08:20.107338 2255814 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:08:20.107385 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:08:22.215033 2255814 crio.go:444] Took 2.112735 seconds to copy over tarball
	I0911 12:08:22.215168 2255814 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
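
No preloaded images were found in the CRI image store, so the cached preload tarball is copied to the guest and unpacked into /var with lz4, which is what the two steps above do. Below is a sketch of that flow using external scp and ssh rather than minikube's sftp-based ssh_runner; pushPreload is a hypothetical helper, while the paths come from the log.

package main

import (
	"fmt"
	"os/exec"
)

// pushPreload copies the preload tarball onto the guest and extracts it into
// /var, then removes the tarball, mirroring the log above.
func pushPreload(addr, keyPath, localTarball string) error {
	if err := exec.Command("scp", "-o", "StrictHostKeyChecking=no", "-i", keyPath,
		localTarball, addr+":/preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("copy preload: %w", err)
	}
	if err := exec.Command("ssh", "-o", "StrictHostKeyChecking=no", "-i", keyPath, addr,
		"sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4 && sudo rm -f /preloaded.tar.lz4").Run(); err != nil {
		return fmt.Errorf("extract preload: %w", err)
	}
	return nil
}

func main() {
	err := pushPreload("docker@192.168.39.230",
		"/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa",
		"/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4")
	fmt.Println(err)
}
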
	I0911 12:08:18.262191 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:18.762029 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.262094 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:19.316271 2255304 api_server.go:72] duration metric: took 1.592409696s to wait for apiserver process to appear ...
	I0911 12:08:19.316309 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:19.316329 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:19.892254 2255048 main.go:141] libmachine: (no-preload-352076) Waiting to get IP...
	I0911 12:08:19.893353 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:19.893857 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:19.893939 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:19.893867 2256639 retry.go:31] will retry after 256.490953ms: waiting for machine to come up
	I0911 12:08:20.152717 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.153686 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.153718 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.153662 2256639 retry.go:31] will retry after 308.528476ms: waiting for machine to come up
	I0911 12:08:20.464569 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.465179 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.465240 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.465150 2256639 retry.go:31] will retry after 329.79495ms: waiting for machine to come up
	I0911 12:08:20.797010 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:20.797581 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:20.797615 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:20.797512 2256639 retry.go:31] will retry after 388.108578ms: waiting for machine to come up
	I0911 12:08:21.187304 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.187980 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.188006 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.187878 2256639 retry.go:31] will retry after 547.488463ms: waiting for machine to come up
	I0911 12:08:21.736835 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:21.737425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:21.737466 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:21.737352 2256639 retry.go:31] will retry after 669.118316ms: waiting for machine to come up
	I0911 12:08:22.407727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:22.408435 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:22.408471 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:22.408353 2256639 retry.go:31] will retry after 986.70059ms: waiting for machine to come up
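
The retry.go lines above show the wait for the restarted VM's IP address: each attempt sleeps a little longer than the last (256ms, 308ms, 329ms, ..., 986ms here, growing to several seconds earlier in the log). A sketch of a jittered, exponentially growing retry helper in that spirit follows; the growth factor and cap are assumptions, not minikube's implementation.

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retryExpo retries fn with exponentially growing, jittered delays until it
// succeeds, the attempt budget runs out, or the per-attempt delay hits max.
func retryExpo(fn func() error, initial, max time.Duration, attempts int) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		fmt.Printf("will retry after %v: %v\n", delay+jitter, err)
		time.Sleep(delay + jitter)
		if delay *= 2; delay > max {
			delay = max
		}
	}
	return err
}

func main() {
	i := 0
	err := retryExpo(func() error {
		i++
		if i < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil
	}, 250*time.Millisecond, 10*time.Second, 8)
	fmt.Println(err)
}
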
	I0911 12:08:23.139403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.141299 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:27.493149 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:25.680145 2255814 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.464917771s)
	I0911 12:08:25.680187 2255814 crio.go:451] Took 3.465097 seconds to extract the tarball
	I0911 12:08:25.680201 2255814 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:08:25.721940 2255814 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:25.770149 2255814 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:08:25.770189 2255814 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:08:25.770296 2255814 ssh_runner.go:195] Run: crio config
	I0911 12:08:25.844108 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:25.844142 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:25.844170 2255814 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:08:25.844197 2255814 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.230 APIServerPort:8444 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-484027 NodeName:default-k8s-diff-port-484027 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.230"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.230 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cer
ts/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:08:25.844471 2255814 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.230
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-484027"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.230
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.230"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
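	The kubeadm, kubelet and kube-proxy documents above are rendered by filling cluster parameters (addresses, ports, versions, subnets) into templates. Below is a minimal sketch of that kind of rendering with Go's text/template; the struct and field names are illustrative, not minikube's actual types, and only the values visible in the log are reused.

	// rendercfg.go - render a fragment of a kubeadm ClusterConfiguration from parameters,
	// the general technique behind the generated YAML above. Types and names are illustrative.
	package main

	import (
		"os"
		"text/template"
	)

	const clusterTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	controlPlaneEndpoint: {{.ControlPlaneEndpoint}}:{{.Port}}
	kubernetesVersion: {{.KubernetesVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	type clusterParams struct {
		ControlPlaneEndpoint string
		Port                 int
		KubernetesVersion    string
		PodSubnet            string
		ServiceSubnet        string
	}

	func main() {
		p := clusterParams{
			ControlPlaneEndpoint: "control-plane.minikube.internal",
			Port:                 8444,
			KubernetesVersion:    "v1.28.1",
			PodSubnet:            "10.244.0.0/16",
			ServiceSubnet:        "10.96.0.0/12",
		}
		t := template.Must(template.New("cluster").Parse(clusterTmpl))
		if err := t.Execute(os.Stdout, p); err != nil {
			panic(err)
		}
	}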
	I0911 12:08:25.844584 2255814 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=default-k8s-diff-port-484027 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.230
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:}
	I0911 12:08:25.844751 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:08:25.855558 2255814 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:08:25.855658 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:08:25.865531 2255814 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (388 bytes)
	I0911 12:08:25.890631 2255814 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:08:25.914304 2255814 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2115 bytes)
	I0911 12:08:25.938065 2255814 ssh_runner.go:195] Run: grep 192.168.39.230	control-plane.minikube.internal$ /etc/hosts
	I0911 12:08:25.943138 2255814 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.230	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:25.963689 2255814 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027 for IP: 192.168.39.230
	I0911 12:08:25.963744 2255814 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:25.963968 2255814 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:08:25.964026 2255814 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:08:25.964139 2255814 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.key
	I0911 12:08:25.964245 2255814 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key.165d62e4
	I0911 12:08:25.964309 2255814 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key
	I0911 12:08:25.964546 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:08:25.964599 2255814 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:08:25.964618 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:08:25.964655 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:08:25.964699 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:08:25.964731 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:08:25.964805 2255814 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:25.965758 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:08:26.001391 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:08:26.032345 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:08:26.065593 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:08:26.100792 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:08:26.135603 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:08:26.170029 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:08:26.203119 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:08:26.232040 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:08:26.262353 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:08:26.292733 2255814 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:08:26.326750 2255814 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:08:26.346334 2255814 ssh_runner.go:195] Run: openssl version
	I0911 12:08:26.353175 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:08:26.365742 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372007 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.372086 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:08:26.378954 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:08:26.390365 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:08:26.403147 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.410930 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.411048 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:08:26.419889 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:08:26.433366 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:08:26.445752 2255814 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452481 2255814 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.452563 2255814 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:08:26.461097 2255814 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:08:26.477855 2255814 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:08:26.483947 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:08:26.492879 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:08:26.501391 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:08:26.510124 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:08:26.518732 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:08:26.527356 2255814 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
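	Each of the openssl probes above uses -checkend 86400 to ask whether the certificate will still be valid 24 hours from now. A rough Go equivalent of that check, assuming the file holds a single PEM-encoded certificate (the path below is one of the files probed in the log):

	// certcheck.go - rough equivalent of `openssl x509 -noout -in <cert> -checkend 86400`.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data) // the first PEM block is assumed to be the certificate
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// True when the certificate's NotAfter falls inside the next `window`.
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(2)
		}
		if soon {
			fmt.Println("certificate expires within 24h") // openssl exits non-zero in this case
			os.Exit(1)
		}
		fmt.Println("certificate is valid for at least another 24h")
	}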
	I0911 12:08:26.536063 2255814 kubeadm.go:404] StartCluster: {Name:default-k8s-diff-port-484027 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.1 ClusterName:default-k8s-diff-port-484027 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8444 NodeName:} Nodes:[{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false Extr
aDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:08:26.536225 2255814 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:08:26.536300 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:26.575522 2255814 cri.go:89] found id: ""
	I0911 12:08:26.575617 2255814 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:08:26.586011 2255814 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:08:26.586043 2255814 kubeadm.go:636] restartCluster start
	I0911 12:08:26.586114 2255814 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:08:26.596758 2255814 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.598534 2255814 kubeconfig.go:92] found "default-k8s-diff-port-484027" server: "https://192.168.39.230:8444"
	I0911 12:08:26.603031 2255814 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:08:26.617921 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.618066 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.632719 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:26.632739 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:26.632793 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:26.650036 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.150299 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.150397 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.165783 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:27.650311 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:27.650416 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:27.665184 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:24.317268 2255304 api_server.go:269] stopped: https://192.168.61.58:8443/healthz: Get "https://192.168.61.58:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0911 12:08:24.317328 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:26.742901 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:26.742942 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:27.243118 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.654196 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.654260 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[-]poststarthook/ca-registration failed: reason withheld
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:27.743438 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:27.767557 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	W0911 12:08:27.767607 2255304 api_server.go:103] status: https://192.168.61.58:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/bootstrap-controller ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/ca-registration ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	healthz check failed
	I0911 12:08:28.243610 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:28.251858 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:28.262619 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:28.262659 2255304 api_server.go:131] duration metric: took 8.946341912s to wait for apiserver health ...
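	The sequence above is the usual apiserver startup pattern: /healthz answers 403 while the RBAC bootstrap roles that permit anonymous reads of /healthz do not exist yet, 500 while post-start hooks such as rbac/bootstrap-roles are still failing, and finally 200 with body "ok". A minimal polling sketch in Go; skipping TLS verification is an assumption made for brevity, a real client would trust the cluster CA the way minikube's api_server.go does:

	// healthzpoll.go - poll an apiserver /healthz endpoint until it reports ok or a deadline passes.
	package main

	import (
		"context"
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func waitHealthy(ctx context.Context, url string) error {
		// InsecureSkipVerify is only for this sketch; use the cluster CA in real code.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		for {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				// 403 and 500 both mean "not ready yet" during startup, as in the log above.
				if resp.StatusCode == http.StatusOK && string(body) == "ok" {
					return nil
				}
				fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(500 * time.Millisecond):
			}
		}
	}

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 2*time.Minute)
		defer cancel()
		if err := waitHealthy(ctx, "https://192.168.61.58:8443/healthz"); err != nil {
			fmt.Println("apiserver never became healthy:", err)
			return
		}
		fmt.Println("apiserver healthy")
	}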
	I0911 12:08:28.262670 2255304 cni.go:84] Creating CNI manager for ""
	I0911 12:08:28.262676 2255304 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:28.264705 2255304 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:23.396798 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:23.398997 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:23.399029 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:23.397251 2256639 retry.go:31] will retry after 1.384367074s: waiting for machine to come up
	I0911 12:08:24.783036 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:24.783547 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:24.783584 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:24.783489 2256639 retry.go:31] will retry after 1.172643107s: waiting for machine to come up
	I0911 12:08:25.958217 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:25.958989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:25.959024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:25.958929 2256639 retry.go:31] will retry after 2.243377044s: waiting for machine to come up
	I0911 12:08:28.205538 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:28.206196 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:28.206226 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:28.206137 2256639 retry.go:31] will retry after 1.83460511s: waiting for machine to come up
	I0911 12:08:28.266346 2255304 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:28.280404 2255304 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:28.308228 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:28.317951 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:28.317994 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:28.318002 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:28.318010 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:28.318024 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Pending
	I0911 12:08:28.318030 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:28.318035 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:28.318039 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:28.318045 2255304 system_pods.go:74] duration metric: took 9.788007ms to wait for pod list to return data ...
	I0911 12:08:28.318055 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:28.323536 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:28.323578 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:28.323593 2255304 node_conditions.go:105] duration metric: took 5.532859ms to run NodePressure ...
	I0911 12:08:28.323619 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.16.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:28.927871 2255304 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938224 2255304 kubeadm.go:787] kubelet initialised
	I0911 12:08:28.938256 2255304 kubeadm.go:788] duration metric: took 10.348938ms waiting for restarted kubelet to initialise ...
	I0911 12:08:28.938267 2255304 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:28.944405 2255304 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.951735 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951774 2255304 pod_ready.go:81] duration metric: took 7.334386ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.951786 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.951800 2255304 pod_ready.go:78] waiting up to 4m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.964451 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964487 2255304 pod_ready.go:81] duration metric: took 12.678175ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.964499 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "etcd-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.964510 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.971472 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971503 2255304 pod_ready.go:81] duration metric: took 6.983445ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.971514 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.971523 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:28.978657 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978691 2255304 pod_ready.go:81] duration metric: took 7.156987ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:28.978704 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:28.978728 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.334593 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334652 2255304 pod_ready.go:81] duration metric: took 355.905465ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.334670 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-proxy-855lt" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.334683 2255304 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:29.734221 2255304 pod_ready.go:97] node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734262 2255304 pod_ready.go:81] duration metric: took 399.567918ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:29.734275 2255304 pod_ready.go:66] WaitExtra: waitPodCondition: node "old-k8s-version-642215" hosting pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace is currently not "Ready" (skipping!): node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:29.734287 2255304 pod_ready.go:38] duration metric: took 796.006553ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
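	pod_ready.go above is waiting on the standard Ready condition in each pod's status, and it skips pods whose node has not reported Ready yet. A small client-go sketch of the same condition check; it assumes a client-go release where Get takes a context, and the kubeconfig path and pod name are only examples:

	// podready.go - report whether a pod's Ready condition is true, the condition waited on above.
	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func isPodReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		client, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "coredns-5644d7b6d9-55m96", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Printf("pod %s Ready=%v\n", pod.Name, isPodReady(pod))
	}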
	I0911 12:08:29.734313 2255304 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:29.749280 2255304 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:29.749313 2255304 kubeadm.go:640] restartCluster took 23.973623788s
	I0911 12:08:29.749325 2255304 kubeadm.go:406] StartCluster complete in 24.023033441s
	I0911 12:08:29.749349 2255304 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.749453 2255304 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:29.752216 2255304 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:29.752582 2255304 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.16.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:29.752784 2255304 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:29.752912 2255304 config.go:182] Loaded profile config "old-k8s-version-642215": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.16.0
	I0911 12:08:29.752947 2255304 addons.go:69] Setting storage-provisioner=true in profile "old-k8s-version-642215"
	I0911 12:08:29.752971 2255304 addons.go:231] Setting addon storage-provisioner=true in "old-k8s-version-642215"
	I0911 12:08:29.752976 2255304 addons.go:69] Setting default-storageclass=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753016 2255304 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "old-k8s-version-642215"
	W0911 12:08:29.752979 2255304 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:29.753159 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.752984 2255304 addons.go:69] Setting metrics-server=true in profile "old-k8s-version-642215"
	I0911 12:08:29.753232 2255304 addons.go:231] Setting addon metrics-server=true in "old-k8s-version-642215"
	W0911 12:08:29.753281 2255304 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:29.753369 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.753517 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753554 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753599 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.753630 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.753954 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.754016 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.773524 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:08:29.773614 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41401
	I0911 12:08:29.774181 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774418 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.774950 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.774967 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775141 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33983
	I0911 12:08:29.775158 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.775176 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.775584 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775585 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.775597 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.775756 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.776112 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776144 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.776178 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.776197 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.776510 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.776970 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.776990 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.790443 2255304 kapi.go:248] "coredns" deployment in "kube-system" namespace and "old-k8s-version-642215" context rescaled to 1 replicas
	I0911 12:08:29.790502 2255304 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.61.58 Port:8443 KubernetesVersion:v1.16.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:29.793918 2255304 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:29.796131 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:29.798116 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38829
	I0911 12:08:29.798581 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.799554 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.799580 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.800105 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.800439 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.802764 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.805061 2255304 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:29.803246 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42071
	I0911 12:08:29.807001 2255304 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:29.807025 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:29.807053 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.807866 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.807924 2255304 addons.go:231] Setting addon default-storageclass=true in "old-k8s-version-642215"
	W0911 12:08:29.807949 2255304 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:29.807985 2255304 host.go:66] Checking if "old-k8s-version-642215" exists ...
	I0911 12:08:29.808406 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.808442 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.809644 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.809667 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.817010 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.817046 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.817101 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817131 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.817158 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.817555 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.817625 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.817868 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.818244 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.820240 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.822846 2255304 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:29.824505 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:29.824526 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:29.824554 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.827924 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828359 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.828396 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.828684 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.828950 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.829099 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.829285 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:29.830900 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45365
	I0911 12:08:29.831463 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.832028 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.832049 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.832646 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.833261 2255304 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:29.833313 2255304 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:29.868600 2255304 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40435
	I0911 12:08:29.869171 2255304 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:29.869822 2255304 main.go:141] libmachine: Using API Version  1
	I0911 12:08:29.869842 2255304 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:29.870236 2255304 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:29.870416 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetState
	I0911 12:08:29.872928 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .DriverName
	I0911 12:08:29.873212 2255304 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:29.873232 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:29.873255 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHHostname
	I0911 12:08:29.876313 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.876963 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:4e:60:8b", ip: ""} in network mk-old-k8s-version-642215: {Iface:virbr3 ExpiryTime:2023-09-11 13:07:45 +0000 UTC Type:0 Mac:52:54:00:4e:60:8b Iaid: IPaddr:192.168.61.58 Prefix:24 Hostname:old-k8s-version-642215 Clientid:01:52:54:00:4e:60:8b}
	I0911 12:08:29.876983 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHPort
	I0911 12:08:29.876999 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | domain old-k8s-version-642215 has defined IP address 192.168.61.58 and MAC address 52:54:00:4e:60:8b in network mk-old-k8s-version-642215
	I0911 12:08:29.877168 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHKeyPath
	I0911 12:08:29.877331 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .GetSSHUsername
	I0911 12:08:29.877468 2255304 sshutil.go:53] new ssh client: &{IP:192.168.61.58 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/old-k8s-version-642215/id_rsa Username:docker}
	I0911 12:08:30.019745 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:30.061364 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:30.061393 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:30.080562 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:30.100494 2255304 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:30.100511 2255304 node_ready.go:35] waiting up to 6m0s for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:30.120618 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:30.120647 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:30.173391 2255304 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.173427 2255304 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:30.208772 2255304 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.16.0/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:30.757802 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.757841 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.757982 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758021 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758294 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758334 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758344 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758353 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758366 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758377 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.758620 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758646 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758660 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758677 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758690 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.758701 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.758717 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758743 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.758943 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.758954 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.759016 2255304 main.go:141] libmachine: (old-k8s-version-642215) DBG | Closing plugin on server side
	I0911 12:08:30.759052 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.759062 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859384 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859426 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.859828 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.859853 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.859864 2255304 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:30.859874 2255304 main.go:141] libmachine: (old-k8s-version-642215) Calling .Close
	I0911 12:08:30.860302 2255304 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:30.860336 2255304 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:30.860357 2255304 addons.go:467] Verifying addon metrics-server=true in "old-k8s-version-642215"
	I0911 12:08:30.862687 2255304 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:08:29.637791 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:31.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:28.150174 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.150294 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.166331 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:28.650905 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:28.650996 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:28.664146 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.150646 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.150745 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.166569 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:29.651031 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:29.651129 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:29.664106 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.150429 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.150535 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.167297 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.650364 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:30.650458 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:30.664180 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.150419 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.150521 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.168242 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:31.650834 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:31.650922 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:31.664772 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.150232 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.150362 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.163224 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:32.650676 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:32.650773 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:32.667077 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:30.864433 2255304 addons.go:502] enable addons completed in 1.111642638s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:08:32.139191 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:30.042388 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:30.043026 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:30.043054 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:30.042967 2256639 retry.go:31] will retry after 3.282840664s: waiting for machine to come up
	I0911 12:08:33.327456 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:33.328007 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:33.328066 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:33.327941 2256639 retry.go:31] will retry after 4.185053881s: waiting for machine to come up
	I0911 12:08:33.639996 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:36.139377 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:33.150668 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.150797 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.163178 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:33.650733 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:33.650845 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:33.666475 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.150939 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.151037 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.163985 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:34.650139 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:34.650250 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:34.664850 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.150224 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.150351 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.169894 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:35.650946 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:35.651044 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:35.665438 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.151019 2255814 api_server.go:166] Checking apiserver status ...
	I0911 12:08:36.151134 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:08:36.164843 2255814 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:08:36.618412 2255814 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:08:36.618460 2255814 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:08:36.618482 2255814 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:08:36.618571 2255814 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:08:36.657264 2255814 cri.go:89] found id: ""
	I0911 12:08:36.657366 2255814 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:08:36.676222 2255814 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:08:36.686832 2255814 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:08:36.686923 2255814 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699618 2255814 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:08:36.699654 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:36.842821 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.471899 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.699214 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.784721 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:37.870994 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:37.871085 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:37.894561 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:34.638777 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.138575 2255304 node_ready.go:58] node "old-k8s-version-642215" has status "Ready":"False"
	I0911 12:08:37.515376 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:37.515955 2255048 main.go:141] libmachine: (no-preload-352076) DBG | unable to find current IP address of domain no-preload-352076 in network mk-no-preload-352076
	I0911 12:08:37.515989 2255048 main.go:141] libmachine: (no-preload-352076) DBG | I0911 12:08:37.515896 2256639 retry.go:31] will retry after 3.472863196s: waiting for machine to come up
	I0911 12:08:38.138433 2255304 node_ready.go:49] node "old-k8s-version-642215" has status "Ready":"True"
	I0911 12:08:38.138464 2255304 node_ready.go:38] duration metric: took 8.037923512s waiting for node "old-k8s-version-642215" to be "Ready" ...
	I0911 12:08:38.138475 2255304 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:38.143177 2255304 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664411 2255304 pod_ready.go:92] pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.664449 2255304 pod_ready.go:81] duration metric: took 521.244524ms waiting for pod "coredns-5644d7b6d9-55m96" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.664463 2255304 pod_ready.go:78] waiting up to 6m0s for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670838 2255304 pod_ready.go:92] pod "etcd-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.670876 2255304 pod_ready.go:81] duration metric: took 6.404356ms waiting for pod "etcd-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.670890 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679254 2255304 pod_ready.go:92] pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.679284 2255304 pod_ready.go:81] duration metric: took 8.385069ms waiting for pod "kube-apiserver-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.679299 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939484 2255304 pod_ready.go:92] pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:38.939514 2255304 pod_ready.go:81] duration metric: took 260.206232ms waiting for pod "kube-controller-manager-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:38.939529 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337858 2255304 pod_ready.go:92] pod "kube-proxy-855lt" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.337894 2255304 pod_ready.go:81] duration metric: took 398.358394ms waiting for pod "kube-proxy-855lt" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.337907 2255304 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738437 2255304 pod_ready.go:92] pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:39.738465 2255304 pod_ready.go:81] duration metric: took 400.549428ms waiting for pod "kube-scheduler-old-k8s-version-642215" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:39.738479 2255304 pod_ready.go:38] duration metric: took 1.599991385s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:39.738509 2255304 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:08:39.738569 2255304 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.760727 2255304 api_server.go:72] duration metric: took 9.970181642s to wait for apiserver process to appear ...
	I0911 12:08:39.760774 2255304 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:39.760797 2255304 api_server.go:253] Checking apiserver healthz at https://192.168.61.58:8443/healthz ...
	I0911 12:08:39.768195 2255304 api_server.go:279] https://192.168.61.58:8443/healthz returned 200:
	ok
	I0911 12:08:39.769416 2255304 api_server.go:141] control plane version: v1.16.0
	I0911 12:08:39.769442 2255304 api_server.go:131] duration metric: took 8.658497ms to wait for apiserver health ...
	I0911 12:08:39.769457 2255304 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:39.940647 2255304 system_pods.go:59] 7 kube-system pods found
	I0911 12:08:39.940683 2255304 system_pods.go:61] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:39.940701 2255304 system_pods.go:61] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:39.940708 2255304 system_pods.go:61] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:39.940715 2255304 system_pods.go:61] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:39.940722 2255304 system_pods.go:61] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:39.940729 2255304 system_pods.go:61] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:39.940736 2255304 system_pods.go:61] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:39.940747 2255304 system_pods.go:74] duration metric: took 171.283587ms to wait for pod list to return data ...
	I0911 12:08:39.940763 2255304 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:08:40.139718 2255304 default_sa.go:45] found service account: "default"
	I0911 12:08:40.139751 2255304 default_sa.go:55] duration metric: took 198.981243ms for default service account to be created ...
	I0911 12:08:40.139763 2255304 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:08:40.340959 2255304 system_pods.go:86] 7 kube-system pods found
	I0911 12:08:40.340998 2255304 system_pods.go:89] "coredns-5644d7b6d9-55m96" [5d921d6f-960e-4606-9b0f-9c53eca5f2a2] Running
	I0911 12:08:40.341008 2255304 system_pods.go:89] "etcd-old-k8s-version-642215" [651b3fe4-d1bd-4f56-8ead-085675ebe780] Running
	I0911 12:08:40.341015 2255304 system_pods.go:89] "kube-apiserver-old-k8s-version-642215" [5cd9fa47-2078-4188-93a7-8c635d00ecaa] Running
	I0911 12:08:40.341028 2255304 system_pods.go:89] "kube-controller-manager-old-k8s-version-642215" [4758c74e-2518-4a41-ac4c-07304be73c5d] Running
	I0911 12:08:40.341035 2255304 system_pods.go:89] "kube-proxy-855lt" [1a95a90c-09bc-46e0-a535-232c2edb964e] Running
	I0911 12:08:40.341042 2255304 system_pods.go:89] "kube-scheduler-old-k8s-version-642215" [6f509fcd-b96d-4f1e-b1a8-5c9195aa42eb] Running
	I0911 12:08:40.341051 2255304 system_pods.go:89] "storage-provisioner" [f278d62d-eed6-47d4-9a76-388b47b929ec] Running
	I0911 12:08:40.341061 2255304 system_pods.go:126] duration metric: took 201.290886ms to wait for k8s-apps to be running ...
	I0911 12:08:40.341073 2255304 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:08:40.341163 2255304 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:40.359994 2255304 system_svc.go:56] duration metric: took 18.903474ms WaitForService to wait for kubelet.
	I0911 12:08:40.360036 2255304 kubeadm.go:581] duration metric: took 10.569498287s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:08:40.360063 2255304 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:40.538713 2255304 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:40.538748 2255304 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:40.538762 2255304 node_conditions.go:105] duration metric: took 178.692637ms to run NodePressure ...
	I0911 12:08:40.538778 2255304 start.go:228] waiting for startup goroutines ...
	I0911 12:08:40.538785 2255304 start.go:233] waiting for cluster config update ...
	I0911 12:08:40.538798 2255304 start.go:242] writing updated cluster config ...
	I0911 12:08:40.539175 2255304 ssh_runner.go:195] Run: rm -f paused
	I0911 12:08:40.601745 2255304 start.go:600] kubectl: 1.28.1, cluster: 1.16.0 (minor skew: 12)
	I0911 12:08:40.604230 2255304 out.go:177] 
	W0911 12:08:40.606184 2255304 out.go:239] ! /usr/local/bin/kubectl is version 1.28.1, which may have incompatibilities with Kubernetes 1.16.0.
	I0911 12:08:40.607933 2255304 out.go:177]   - Want kubectl v1.16.0? Try 'minikube kubectl -- get pods -A'
	I0911 12:08:40.609870 2255304 out.go:177] * Done! kubectl is now configured to use "old-k8s-version-642215" cluster and "default" namespace by default
	I0911 12:08:38.638441 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:40.639280 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:38.411419 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:38.910721 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.410710 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:39.911432 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.411115 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:08:40.438764 2255814 api_server.go:72] duration metric: took 2.567766062s to wait for apiserver process to appear ...
	I0911 12:08:40.438803 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:08:40.438828 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.439582 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.439644 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.440098 2255814 api_server.go:269] stopped: https://192.168.39.230:8444/healthz: Get "https://192.168.39.230:8444/healthz": dial tcp 192.168.39.230:8444: connect: connection refused
	I0911 12:08:40.940202 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:40.989968 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990485 2255048 main.go:141] libmachine: (no-preload-352076) Found IP for machine: 192.168.72.157
	I0911 12:08:40.990519 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has current primary IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.990530 2255048 main.go:141] libmachine: (no-preload-352076) Reserving static IP address...
	I0911 12:08:40.990910 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.990942 2255048 main.go:141] libmachine: (no-preload-352076) Reserved static IP address: 192.168.72.157
	I0911 12:08:40.991004 2255048 main.go:141] libmachine: (no-preload-352076) Waiting for SSH to be available...
	I0911 12:08:40.991024 2255048 main.go:141] libmachine: (no-preload-352076) DBG | skip adding static IP to network mk-no-preload-352076 - found existing host DHCP lease matching {name: "no-preload-352076", mac: "52:54:00:91:89:e0", ip: "192.168.72.157"}
	I0911 12:08:40.991044 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Getting to WaitForSSH function...
	I0911 12:08:40.994061 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994417 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:40.994478 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:40.994612 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH client type: external
	I0911 12:08:40.994653 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa (-rw-------)
	I0911 12:08:40.994693 2255048 main.go:141] libmachine: (no-preload-352076) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.157 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:08:40.994711 2255048 main.go:141] libmachine: (no-preload-352076) DBG | About to run SSH command:
	I0911 12:08:40.994725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | exit 0
	I0911 12:08:41.093865 2255048 main.go:141] libmachine: (no-preload-352076) DBG | SSH cmd err, output: <nil>: 
	I0911 12:08:41.094360 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetConfigRaw
	I0911 12:08:41.095142 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.098534 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.098944 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.098985 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.099319 2255048 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/config.json ...
	I0911 12:08:41.099667 2255048 machine.go:88] provisioning docker machine ...
	I0911 12:08:41.099711 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:41.100079 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100503 2255048 buildroot.go:166] provisioning hostname "no-preload-352076"
	I0911 12:08:41.100535 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.100868 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.104253 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.104920 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.105102 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.105420 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.105864 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106201 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.106627 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.106937 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.107432 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.107447 2255048 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-352076 && echo "no-preload-352076" | sudo tee /etc/hostname
	I0911 12:08:41.249885 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-352076
	
	I0911 12:08:41.249919 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.253419 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.253854 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.253892 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.254125 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.254373 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254576 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.254752 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.254945 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.255592 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.255624 2255048 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-352076' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-352076/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-352076' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:08:41.394308 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:08:41.394348 2255048 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:08:41.394378 2255048 buildroot.go:174] setting up certificates
	I0911 12:08:41.394388 2255048 provision.go:83] configureAuth start
	I0911 12:08:41.394401 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetMachineName
	I0911 12:08:41.394737 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:41.398042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398506 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.398540 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.398747 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.401368 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401743 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.401797 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.401939 2255048 provision.go:138] copyHostCerts
	I0911 12:08:41.402020 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:08:41.402034 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:08:41.402102 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:08:41.402226 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:08:41.402238 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:08:41.402278 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:08:41.402374 2255048 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:08:41.402386 2255048 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:08:41.402413 2255048 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:08:41.402501 2255048 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.no-preload-352076 san=[192.168.72.157 192.168.72.157 localhost 127.0.0.1 minikube no-preload-352076]
	I0911 12:08:41.717751 2255048 provision.go:172] copyRemoteCerts
	I0911 12:08:41.717828 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:08:41.717865 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.721152 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721457 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.721499 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.721720 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.721943 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.722111 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.722261 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:41.818932 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:08:41.846852 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:08:41.875977 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 12:08:41.905364 2255048 provision.go:86] duration metric: configureAuth took 510.946609ms
	I0911 12:08:41.905401 2255048 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:08:41.905662 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:41.905762 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:41.909182 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909656 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:41.909725 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:41.909913 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:41.910149 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910342 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:41.910487 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:41.910649 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:41.911134 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:41.911154 2255048 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:08:42.260214 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:08:42.260254 2255048 machine.go:91] provisioned docker machine in 1.16057097s
	I0911 12:08:42.260268 2255048 start.go:300] post-start starting for "no-preload-352076" (driver="kvm2")
	I0911 12:08:42.260283 2255048 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:08:42.260307 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.260700 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:08:42.260738 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.263782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264157 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.264197 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.264358 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.264595 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.264808 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.265010 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.356470 2255048 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:08:42.361886 2255048 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:08:42.361921 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:08:42.362004 2255048 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:08:42.362082 2255048 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:08:42.362196 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:08:42.372005 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:08:42.400800 2255048 start.go:303] post-start completed in 140.51468ms
	I0911 12:08:42.400850 2255048 fix.go:56] fixHost completed within 24.064734762s
	I0911 12:08:42.400882 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.404351 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.404799 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.404862 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.405055 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.405297 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405484 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.405644 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.405859 2255048 main.go:141] libmachine: Using SSH client type: native
	I0911 12:08:42.406477 2255048 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.157 22 <nil> <nil>}
	I0911 12:08:42.406505 2255048 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:08:42.529978 2255048 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694434122.467205529
	
	I0911 12:08:42.530008 2255048 fix.go:206] guest clock: 1694434122.467205529
	I0911 12:08:42.530020 2255048 fix.go:219] Guest: 2023-09-11 12:08:42.467205529 +0000 UTC Remote: 2023-09-11 12:08:42.400855668 +0000 UTC m=+369.043734805 (delta=66.349861ms)
	I0911 12:08:42.530049 2255048 fix.go:190] guest clock delta is within tolerance: 66.349861ms
	I0911 12:08:42.530062 2255048 start.go:83] releasing machines lock for "no-preload-352076", held for 24.19398788s
	I0911 12:08:42.530094 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.530397 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:42.533425 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.533777 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.533809 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.534032 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534670 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534881 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:08:42.534986 2255048 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:08:42.535048 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.535193 2255048 ssh_runner.go:195] Run: cat /version.json
	I0911 12:08:42.535235 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:08:42.538009 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538210 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538356 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538386 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538551 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538630 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:42.538658 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:42.538748 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.538862 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:08:42.538939 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539033 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:08:42.539212 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:08:42.539226 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.539373 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:08:42.659315 2255048 ssh_runner.go:195] Run: systemctl --version
	I0911 12:08:42.666117 2255048 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:08:42.827592 2255048 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:08:42.834283 2255048 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:08:42.834379 2255048 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:08:42.855077 2255048 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:08:42.855107 2255048 start.go:466] detecting cgroup driver to use...
	I0911 12:08:42.855187 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:08:42.871553 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:08:42.886253 2255048 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:08:42.886341 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:08:42.902211 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:08:42.917991 2255048 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:08:43.043679 2255048 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:08:43.182633 2255048 docker.go:212] disabling docker service ...
	I0911 12:08:43.182709 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:08:43.200269 2255048 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:08:43.216232 2255048 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:08:43.338376 2255048 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:08:43.460730 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:08:43.478083 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:08:43.499948 2255048 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:08:43.500018 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.513007 2255048 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:08:43.513098 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.526435 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.539976 2255048 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:08:43.553967 2255048 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:08:43.568765 2255048 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:08:43.580392 2255048 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:08:43.580481 2255048 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:08:43.599784 2255048 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:08:43.612160 2255048 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:08:43.725608 2255048 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:08:43.930261 2255048 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:08:43.930390 2255048 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:08:43.937749 2255048 start.go:534] Will wait 60s for crictl version
	I0911 12:08:43.937875 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:43.942818 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:08:43.986093 2255048 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:08:43.986210 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.042887 2255048 ssh_runner.go:195] Run: crio --version
	I0911 12:08:44.106673 2255048 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:08:45.592797 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.592855 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.592874 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.637810 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:08:45.637846 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:08:45.940997 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:45.947826 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:45.947867 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.440462 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.449732 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:08:46.449772 2255814 api_server.go:103] status: https://192.168.39.230:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:08:46.940777 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:08:46.946988 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:08:46.957787 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:08:46.957832 2255814 api_server.go:131] duration metric: took 6.519019358s to wait for apiserver health ...
	I0911 12:08:46.957845 2255814 cni.go:84] Creating CNI manager for ""
	I0911 12:08:46.957854 2255814 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:08:46.960358 2255814 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:08:43.138628 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:45.640990 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:46.962120 2255814 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:08:46.987804 2255814 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:08:47.021845 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:08:47.042508 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:08:47.042560 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:08:47.042575 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:08:47.042585 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:08:47.042600 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:08:47.042612 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:08:47.042624 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:08:47.042641 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:08:47.042652 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:08:47.042663 2255814 system_pods.go:74] duration metric: took 20.787272ms to wait for pod list to return data ...
	I0911 12:08:47.042677 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:08:47.048412 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:08:47.048524 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:08:47.048547 2255814 node_conditions.go:105] duration metric: took 5.861231ms to run NodePressure ...
	I0911 12:08:47.048595 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:08:47.550933 2255814 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556511 2255814 kubeadm.go:787] kubelet initialised
	I0911 12:08:47.556543 2255814 kubeadm.go:788] duration metric: took 5.579487ms waiting for restarted kubelet to initialise ...
	I0911 12:08:47.556554 2255814 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:47.563694 2255814 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.569943 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.569975 2255814 pod_ready.go:81] duration metric: took 6.244443ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.569986 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.570001 2255814 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.576703 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576777 2255814 pod_ready.go:81] duration metric: took 6.7656ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.576791 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.576805 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.587740 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587788 2255814 pod_ready.go:81] duration metric: took 10.95451ms waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.587813 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.587833 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.596430 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596468 2255814 pod_ready.go:81] duration metric: took 8.617854ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.596481 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.596492 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:47.956009 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956047 2255814 pod_ready.go:81] duration metric: took 359.546333ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:47.956060 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-proxy-ldgjr" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:47.956078 2255814 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:44.108577 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetIP
	I0911 12:08:44.112208 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.112736 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:08:44.112782 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:08:44.113074 2255048 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0911 12:08:44.119517 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:08:44.140305 2255048 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:08:44.140398 2255048 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:08:44.184487 2255048 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:08:44.184529 2255048 cache_images.go:88] LoadImages start: [registry.k8s.io/kube-apiserver:v1.28.1 registry.k8s.io/kube-controller-manager:v1.28.1 registry.k8s.io/kube-scheduler:v1.28.1 registry.k8s.io/kube-proxy:v1.28.1 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.9-0 registry.k8s.io/coredns/coredns:v1.10.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0911 12:08:44.184600 2255048 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.184910 2255048 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.185117 2255048 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.185240 2255048 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.185366 2255048 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.185790 2255048 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.185987 2255048 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0911 12:08:44.186471 2255048 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.186856 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.186943 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.187105 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.187306 2255048 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.9-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.187523 2255048 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.28.1: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.187570 2255048 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0911 12:08:44.188031 2255048 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.10.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.188698 2255048 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.350727 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.351429 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.353625 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.356576 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.374129 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0911 12:08:44.385524 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.410764 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.472311 2255048 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.10.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.10.1" does not exist at hash "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc" in container runtime
	I0911 12:08:44.472382 2255048 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.472453 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.572121 2255048 cache_images.go:116] "registry.k8s.io/etcd:3.5.9-0" needs transfer: "registry.k8s.io/etcd:3.5.9-0" does not exist at hash "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9" in container runtime
	I0911 12:08:44.572186 2255048 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.572258 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589426 2255048 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.28.1" needs transfer: "registry.k8s.io/kube-scheduler:v1.28.1" does not exist at hash "b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a" in container runtime
	I0911 12:08:44.589558 2255048 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.589492 2255048 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.28.1" needs transfer: "registry.k8s.io/kube-apiserver:v1.28.1" does not exist at hash "5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77" in container runtime
	I0911 12:08:44.589638 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.589643 2255048 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.589692 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691568 2255048 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.28.1" needs transfer: "registry.k8s.io/kube-controller-manager:v1.28.1" does not exist at hash "821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac" in container runtime
	I0911 12:08:44.691627 2255048 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.691657 2255048 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.28.1" needs transfer: "registry.k8s.io/kube-proxy:v1.28.1" does not exist at hash "6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5" in container runtime
	I0911 12:08:44.691734 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.10.1
	I0911 12:08:44.691767 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.9-0
	I0911 12:08:44.691749 2255048 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.691816 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691705 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.691943 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.28.1
	I0911 12:08:44.691955 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.28.1
	I0911 12:08:44.729362 2255048 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.778025 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0
	I0911 12:08:44.778152 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1
	I0911 12:08:44.778215 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:44.778280 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.799788 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1
	I0911 12:08:44.799952 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:08:44.799997 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.28.1
	I0911 12:08:44.800112 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.28.1
	I0911 12:08:44.800183 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1
	I0911 12:08:44.800283 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:44.851138 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.10.1 (exists)
	I0911 12:08:44.851174 2255048 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851192 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.9-0 (exists)
	I0911 12:08:44.851227 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.28.1 (exists)
	I0911 12:08:44.851239 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1
	I0911 12:08:44.851141 2255048 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0911 12:08:44.851363 2255048 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:44.851430 2255048 ssh_runner.go:195] Run: which crictl
	I0911 12:08:44.896214 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1
	I0911 12:08:44.896261 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1
	I0911 12:08:44.896310 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.28.1 (exists)
	I0911 12:08:44.896376 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:44.896377 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:08:46.231671 2255048 ssh_runner.go:235] Completed: which crictl: (1.380174214s)
	I0911 12:08:46.231732 2255048 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.28.1: (1.33531707s)
	I0911 12:08:46.231734 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.10.1: (1.38044194s)
	I0911 12:08:46.231760 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.10.1 from cache
	I0911 12:08:46.231767 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.28.1 (exists)
	I0911 12:08:46.231780 2255048 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:46.231781 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231821 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1
	I0911 12:08:46.231777 2255048 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.28.1: (1.335378451s)
	I0911 12:08:46.231904 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.28.1 (exists)
	I0911 12:08:48.356501 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356547 2255814 pod_ready.go:81] duration metric: took 400.453753ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.356563 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.356575 2255814 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:48.756718 2255814 pod_ready.go:97] node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756761 2255814 pod_ready.go:81] duration metric: took 400.17438ms waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:08:48.756775 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-484027" hosting pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:48.756786 2255814 pod_ready.go:38] duration metric: took 1.200219314s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:48.756806 2255814 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:08:48.775561 2255814 ops.go:34] apiserver oom_adj: -16
	I0911 12:08:48.775587 2255814 kubeadm.go:640] restartCluster took 22.189536767s
	I0911 12:08:48.775598 2255814 kubeadm.go:406] StartCluster complete in 22.23955062s
	I0911 12:08:48.775621 2255814 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.775713 2255814 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:08:48.778091 2255814 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:08:48.778397 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:08:48.778424 2255814 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:08:48.778566 2255814 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778597 2255814 addons.go:231] Setting addon storage-provisioner=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.778614 2255814 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:08:48.778648 2255814 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:08:48.778696 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.778718 2255814 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.778734 2255814 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-484027"
	I0911 12:08:48.779141 2255814 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-484027"
	I0911 12:08:48.779145 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779159 2255814 addons.go:231] Setting addon metrics-server=true in "default-k8s-diff-port-484027"
	W0911 12:08:48.779167 2255814 addons.go:240] addon metrics-server should already be in state true
	I0911 12:08:48.779173 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779207 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.779289 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779343 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.779509 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.779556 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.786929 2255814 kapi.go:248] "coredns" deployment in "kube-system" namespace and "default-k8s-diff-port-484027" context rescaled to 1 replicas
	I0911 12:08:48.786996 2255814 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.39.230 Port:8444 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:08:48.789204 2255814 out.go:177] * Verifying Kubernetes components...
	I0911 12:08:48.790973 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:08:48.799774 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35627
	I0911 12:08:48.800366 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.800462 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43291
	I0911 12:08:48.801065 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.801286 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.801312 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802064 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.802091 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.802105 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41463
	I0911 12:08:48.802166 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802495 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.802706 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.802842 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.802873 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.803804 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.803827 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.804437 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.805108 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.805156 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.823113 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41879
	I0911 12:08:48.823705 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.824347 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.824378 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.824848 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.825073 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.827337 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.827355 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43649
	I0911 12:08:48.830403 2255814 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:08:48.827726 2255814 addons.go:231] Setting addon default-storageclass=true in "default-k8s-diff-port-484027"
	I0911 12:08:48.828116 2255814 main.go:141] libmachine: () Calling .GetVersion
	W0911 12:08:48.832240 2255814 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:08:48.832297 2255814 host.go:66] Checking if "default-k8s-diff-port-484027" exists ...
	I0911 12:08:48.832351 2255814 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:48.832372 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:08:48.832404 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.832767 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.832846 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.833819 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.833843 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.834348 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.834583 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.836499 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.837953 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838586 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.838616 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.838662 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.838863 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.839009 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.839383 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.848085 2255814 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:08:48.850041 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:08:48.850077 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:08:48.850117 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.853766 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854286 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.854324 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.854695 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.855024 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.855222 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.855427 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.857253 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46435
	I0911 12:08:48.858013 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.858572 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.858593 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.858922 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.859424 2255814 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:08:48.859461 2255814 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:08:48.877066 2255814 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33371
	I0911 12:08:48.877762 2255814 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:08:48.878430 2255814 main.go:141] libmachine: Using API Version  1
	I0911 12:08:48.878451 2255814 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:08:48.878986 2255814 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:08:48.879214 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetState
	I0911 12:08:48.881452 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .DriverName
	I0911 12:08:48.881771 2255814 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:48.881790 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:08:48.881810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHHostname
	I0911 12:08:48.885826 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.886380 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b1:16:75", ip: ""} in network mk-default-k8s-diff-port-484027: {Iface:virbr1 ExpiryTime:2023-09-11 13:01:32 +0000 UTC Type:0 Mac:52:54:00:b1:16:75 Iaid: IPaddr:192.168.39.230 Prefix:24 Hostname:default-k8s-diff-port-484027 Clientid:01:52:54:00:b1:16:75}
	I0911 12:08:48.886406 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | domain default-k8s-diff-port-484027 has defined IP address 192.168.39.230 and MAC address 52:54:00:b1:16:75 in network mk-default-k8s-diff-port-484027
	I0911 12:08:48.887000 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHPort
	I0911 12:08:48.887261 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHKeyPath
	I0911 12:08:48.887456 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .GetSSHUsername
	I0911 12:08:48.887604 2255814 sshutil.go:53] new ssh client: &{IP:192.168.39.230 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/default-k8s-diff-port-484027/id_rsa Username:docker}
	I0911 12:08:48.990643 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:08:49.087344 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:08:49.087379 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:08:49.088448 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:08:49.172284 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:08:49.172325 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:08:49.284010 2255814 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:49.284396 2255814 start.go:874] CoreDNS already contains "host.minikube.internal" host record, skipping...
	I0911 12:08:49.296054 2255814 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:49.296086 2255814 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:08:49.379706 2255814 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:08:51.018731 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.028036666s)
	I0911 12:08:51.018796 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018810 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.018733 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.930229373s)
	I0911 12:08:51.018900 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.018920 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019201 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019252 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.019291 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019304 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019315 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019325 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.019420 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.019433 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.019445 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.019457 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021142 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021184 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.021199 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021204 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021223 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021238 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.021259 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.021542 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.021615 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.021683 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.122492 2255814 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.742646501s)
	I0911 12:08:51.122563 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.122582 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123183 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123214 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123224 2255814 main.go:141] libmachine: Making call to close driver server
	I0911 12:08:51.123232 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) Calling .Close
	I0911 12:08:51.123668 2255814 main.go:141] libmachine: (default-k8s-diff-port-484027) DBG | Closing plugin on server side
	I0911 12:08:51.123713 2255814 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:08:51.123729 2255814 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:08:51.123743 2255814 addons.go:467] Verifying addon metrics-server=true in "default-k8s-diff-port-484027"
	I0911 12:08:51.126333 2255814 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:08:48.273682 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:50.640588 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:51.128042 2255814 addons.go:502] enable addons completed in 2.34962006s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:08:51.299348 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:49.857883 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.28.1: (3.62602487s)
	I0911 12:08:49.857920 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.28.1 from cache
	I0911 12:08:49.857935 2255048 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858008 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0
	I0911 12:08:49.858007 2255048 ssh_runner.go:235] Completed: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5: (3.626200516s)
	I0911 12:08:49.858128 2255048 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0911 12:08:49.858215 2255048 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:08:53.140732 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:55.639106 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:53.799851 2255814 node_ready.go:58] node "default-k8s-diff-port-484027" has status "Ready":"False"
	I0911 12:08:56.661585 2255814 node_ready.go:49] node "default-k8s-diff-port-484027" has status "Ready":"True"
	I0911 12:08:56.661621 2255814 node_ready.go:38] duration metric: took 7.377564832s waiting for node "default-k8s-diff-port-484027" to be "Ready" ...
	I0911 12:08:56.661651 2255814 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:08:56.675600 2255814 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.686880 2255814 pod_ready.go:92] pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.686977 2255814 pod_ready.go:81] duration metric: took 11.34453ms waiting for pod "coredns-5dd5756b68-xszs4" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.687027 2255814 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.695897 2255814 pod_ready.go:92] pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:56.695991 2255814 pod_ready.go:81] duration metric: took 8.931143ms waiting for pod "etcd-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:56.696011 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:57.305638 2255048 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5: (7.447392742s)
	I0911 12:08:57.305689 2255048 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0911 12:08:57.305809 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.9-0: (7.447768556s)
	I0911 12:08:57.305836 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.9-0 from cache
	I0911 12:08:57.305855 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:57.305932 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1
	I0911 12:08:58.142333 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.644281 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:08:58.721936 2255814 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.721964 2255814 pod_ready.go:81] duration metric: took 2.025944093s waiting for pod "kube-apiserver-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.721978 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728483 2255814 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.728509 2255814 pod_ready.go:81] duration metric: took 6.525188ms waiting for pod "kube-controller-manager-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.728522 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868777 2255814 pod_ready.go:92] pod "kube-proxy-ldgjr" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:58.868821 2255814 pod_ready.go:81] duration metric: took 140.280926ms waiting for pod "kube-proxy-ldgjr" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:58.868839 2255814 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266668 2255814 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace has status "Ready":"True"
	I0911 12:08:59.266699 2255814 pod_ready.go:81] duration metric: took 397.852252ms waiting for pod "kube-scheduler-default-k8s-diff-port-484027" in "kube-system" namespace to be "Ready" ...
	I0911 12:08:59.266710 2255814 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:01.578711 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:00.172738 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.28.1: (2.866760661s)
	I0911 12:09:00.172779 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.28.1 from cache
	I0911 12:09:00.172904 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:00.172989 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1
	I0911 12:09:01.745988 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.28.1: (1.572965994s)
	I0911 12:09:01.746029 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.28.1 from cache
	I0911 12:09:01.746047 2255048 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:01.746105 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1
	I0911 12:09:03.140327 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:05.141268 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:04.080056 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:06.578690 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:03.814358 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.28.1: (2.068208039s)
	I0911 12:09:03.814432 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.28.1 from cache
	I0911 12:09:03.814452 2255048 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:03.814516 2255048 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0911 12:09:04.982461 2255048 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5: (1.167909383s)
	I0911 12:09:04.982505 2255048 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0911 12:09:04.982542 2255048 cache_images.go:123] Successfully loaded all cached images
	I0911 12:09:04.982549 2255048 cache_images.go:92] LoadImages completed in 20.798002598s
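Each "podman load" above feeds a cached image tarball into CRI-O's storage on the guest; once the last one finishes, LoadImages reports completion. A rough, self-contained sketch of that step, with paths taken from the log; in the real flow these commands run over SSH inside the VM via ssh_runner rather than locally:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Tarball paths mirror the ones loaded in the log above.
	images := []string{
		"/var/lib/minikube/images/etcd_3.5.9-0",
		"/var/lib/minikube/images/kube-apiserver_v1.28.1",
		"/var/lib/minikube/images/storage-provisioner_v5",
	}
	for _, tar := range images {
		// Equivalent of: sudo podman load -i <tarball>
		out, err := exec.Command("sudo", "podman", "load", "-i", tar).CombinedOutput()
		if err != nil {
			fmt.Printf("loading %s failed: %v\n%s", tar, err, out)
			continue
		}
		fmt.Printf("loaded %s\n", tar)
	}
}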
	I0911 12:09:04.982647 2255048 ssh_runner.go:195] Run: crio config
	I0911 12:09:05.047992 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:05.048024 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:05.048049 2255048 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:09:05.048070 2255048 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.157 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-352076 NodeName:no-preload-352076 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.157"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.157 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:09:05.048268 2255048 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.157
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-352076"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.157
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.157"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:09:05.048352 2255048 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=no-preload-352076 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.157
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:09:05.048427 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:09:05.060720 2255048 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:09:05.060881 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:09:05.072228 2255048 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (377 bytes)
	I0911 12:09:05.093943 2255048 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:09:05.113383 2255048 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2104 bytes)
	I0911 12:09:05.136859 2255048 ssh_runner.go:195] Run: grep 192.168.72.157	control-plane.minikube.internal$ /etc/hosts
	I0911 12:09:05.143807 2255048 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.157	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
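The bash one-liner above rewrites /etc/hosts so that control-plane.minikube.internal resolves to the node IP: it filters out any stale entry and appends a fresh mapping. A minimal local sketch of the same edit in Go, with the IP and hostname taken from the log; not minikube's code:

package main

import (
	"os"
	"strings"
)

func main() {
	const entry = "192.168.72.157\tcontrol-plane.minikube.internal"

	data, err := os.ReadFile("/etc/hosts")
	if err != nil {
		panic(err)
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		// Drop any existing mapping, like the grep -v in the one-liner.
		if strings.HasSuffix(line, "\tcontrol-plane.minikube.internal") {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, entry)
	if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
		panic(err)
	}
}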
	I0911 12:09:05.160629 2255048 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076 for IP: 192.168.72.157
	I0911 12:09:05.160686 2255048 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:09:05.161057 2255048 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:09:05.161131 2255048 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:09:05.161253 2255048 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.key
	I0911 12:09:05.161367 2255048 certs.go:315] skipping minikube signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key.66fc92c5
	I0911 12:09:05.161447 2255048 certs.go:315] skipping aggregator signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key
	I0911 12:09:05.161605 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:09:05.161646 2255048 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:09:05.161655 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:09:05.161696 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:09:05.161745 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:09:05.161773 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:09:05.161838 2255048 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:09:05.162864 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:09:05.196273 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 12:09:05.226310 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:09:05.259094 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:09:05.296329 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:09:05.329537 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:09:05.363893 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:09:05.398183 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:09:05.431986 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:09:05.462584 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:09:05.494047 2255048 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:09:05.531243 2255048 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:09:05.554858 2255048 ssh_runner.go:195] Run: openssl version
	I0911 12:09:05.564158 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:09:05.578611 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585480 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.585563 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:09:05.592835 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:09:05.606413 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:09:05.618978 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626101 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.626179 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:09:05.634526 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:09:05.648394 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:09:05.664598 2255048 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671632 2255048 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.671734 2255048 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:09:05.679143 2255048 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:09:05.691797 2255048 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:09:05.698734 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0911 12:09:05.706797 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0911 12:09:05.713927 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0911 12:09:05.721394 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0911 12:09:05.728652 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0911 12:09:05.736364 2255048 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
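The "openssl x509 -checkend 86400" calls above verify that each control-plane certificate will still be valid 24 hours from now; a non-zero exit would trigger regeneration. The same check expressed in Go, as a sketch with a placeholder path rather than minikube's certs.go:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the certificate at path expires within d.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-etcd-client.crt", 24*time.Hour)
	if err != nil {
		panic(err)
	}
	fmt.Println("expires within 24h:", soon)
}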
	I0911 12:09:05.744505 2255048 kubeadm.go:404] StartCluster: {Name:no-preload-352076 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:no-preload-352076 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:09:05.744673 2255048 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:09:05.744751 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:05.783568 2255048 cri.go:89] found id: ""
	I0911 12:09:05.783665 2255048 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:09:05.794403 2255048 kubeadm.go:419] found existing configuration files, will attempt cluster restart
	I0911 12:09:05.794443 2255048 kubeadm.go:636] restartCluster start
	I0911 12:09:05.794532 2255048 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0911 12:09:05.808458 2255048 kubeadm.go:127] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.809808 2255048 kubeconfig.go:92] found "no-preload-352076" server: "https://192.168.72.157:8443"
	I0911 12:09:05.812541 2255048 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0911 12:09:05.824406 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.824488 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.838004 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:05.838029 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:05.838081 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:05.850725 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.351553 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.351683 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.365583 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:06.851068 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:06.851203 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:06.865829 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.351654 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.351840 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.365462 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.851109 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:07.851227 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:07.865132 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:08.351854 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.351980 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.364980 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:07.637342 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.637899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.638591 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:09.078188 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:11.575790 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:08.850933 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:08.851079 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:08.865313 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.350825 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.350918 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.363633 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:09.850908 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:09.851009 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:09.864051 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.351371 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.351459 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.364187 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:10.851868 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:10.851993 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:10.865706 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.351327 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.351445 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.364860 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:11.851490 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:11.851579 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:11.865090 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.351698 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.351841 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.365554 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:12.851082 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:12.851189 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:12.863359 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.351652 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.351762 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.364220 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:13.638913 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.138385 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:14.075701 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:16.083424 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:13.851558 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:13.851650 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:13.864548 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.351104 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.351196 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.363567 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:14.851181 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:14.851287 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:14.865371 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.351813 2255048 api_server.go:166] Checking apiserver status ...
	I0911 12:09:15.351921 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0911 12:09:15.364728 2255048 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0911 12:09:15.825491 2255048 kubeadm.go:611] needs reconfigure: apiserver error: context deadline exceeded
	I0911 12:09:15.825532 2255048 kubeadm.go:1128] stopping kube-system containers ...
	I0911 12:09:15.825549 2255048 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0911 12:09:15.825628 2255048 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:09:15.863098 2255048 cri.go:89] found id: ""
	I0911 12:09:15.863207 2255048 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0911 12:09:15.881673 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:09:15.892264 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:09:15.892363 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903142 2255048 kubeadm.go:713] reconfiguring cluster from /var/tmp/minikube/kubeadm.yaml
	I0911 12:09:15.903168 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:16.075542 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.073042 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.305269 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.399770 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:17.484630 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:09:17.484713 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:17.502746 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.017919 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:18.139562 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:20.643130 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.578074 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:21.077490 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:18.517850 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.018007 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:19.518125 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.018229 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:09:20.062967 2255048 api_server.go:72] duration metric: took 2.578334133s to wait for apiserver process to appear ...
	I0911 12:09:20.062999 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:09:20.063024 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.063765 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.063812 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:20.064348 2255048 api_server.go:269] stopped: https://192.168.72.157:8443/healthz: Get "https://192.168.72.157:8443/healthz": dial tcp 192.168.72.157:8443: connect: connection refused
	I0911 12:09:20.564847 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.276251 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.276297 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.276314 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.320049 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0911 12:09:24.320081 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0911 12:09:24.564444 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:24.570484 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:24.570524 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.064830 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.071229 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0911 12:09:25.071269 2255048 api_server.go:103] status: https://192.168.72.157:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0911 12:09:25.564901 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:09:25.570887 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:09:25.580713 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:09:25.580746 2255048 api_server.go:131] duration metric: took 5.517738896s to wait for apiserver health ...
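The healthz sequence above is the usual restart pattern: connection refused while the apiserver static pod starts, 403 until the RBAC bootstrap roles exist, 500 while post-start hooks finish, then 200. A stripped-down sketch of that retry loop, with the endpoint taken from the log; the real check authenticates with the cluster's client certificate, which this sketch omits:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// The apiserver presents a self-signed CA here, so the sketch skips verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.157:8443/healthz"

	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			time.Sleep(500 * time.Millisecond) // connection refused: apiserver not up yet
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Println("healthz:", string(body)) // "ok"
			return
		}
		// 403/500 responses carry bodies like the [+]/[-] check lists in the log above.
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("apiserver never became healthy")
}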
	I0911 12:09:25.580759 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:09:25.580768 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:09:25.583425 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:09:23.139085 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.140565 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:23.077522 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.576471 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:25.585300 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:09:25.610742 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:09:25.660757 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:09:25.680043 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:09:25.680087 2255048 system_pods.go:61] "coredns-5dd5756b68-mghg7" [380c0d4e-d7e3-4434-9757-f4debc5206d5] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:09:25.680104 2255048 system_pods.go:61] "etcd-no-preload-352076" [4f74cb61-25fb-4478-afd4-3b0f0ef1bdae] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0911 12:09:25.680115 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [09ed0349-f0dc-4ab0-b057-230daeb8e7c1] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0911 12:09:25.680127 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [c93ec6ac-408b-4859-b45b-79bb3e3b53d1] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0911 12:09:25.680142 2255048 system_pods.go:61] "kube-proxy-f748l" [8379d15e-e886-48cb-8a53-3a48aef7c9e7] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:09:25.680157 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [7e7068d1-7f6b-4fe7-b1f4-73ddab4c7db4] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0911 12:09:25.680174 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-tvrkk" [7b463025-d2f8-4f1d-aa69-740cd828c1d9] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:09:25.680188 2255048 system_pods.go:61] "storage-provisioner" [52928c2e-1383-41b0-817c-203d016da7df] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:09:25.680201 2255048 system_pods.go:74] duration metric: took 19.417405ms to wait for pod list to return data ...
	I0911 12:09:25.680220 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:09:25.685088 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:09:25.685127 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:09:25.685144 2255048 node_conditions.go:105] duration metric: took 4.914847ms to run NodePressure ...
	I0911 12:09:25.685170 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0911 12:09:26.127026 2255048 kubeadm.go:772] waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137211 2255048 kubeadm.go:787] kubelet initialised
	I0911 12:09:26.137247 2255048 kubeadm.go:788] duration metric: took 10.126758ms waiting for restarted kubelet to initialise ...
	I0911 12:09:26.137258 2255048 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:09:26.144732 2255048 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:28.168555 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:27.637951 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.142107 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.144784 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:28.078707 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.575535 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:32.575917 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:30.169198 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:31.168599 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:31.168623 2255048 pod_ready.go:81] duration metric: took 5.02386193s waiting for pod "coredns-5dd5756b68-mghg7" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:31.168633 2255048 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194954 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:32.194986 2255048 pod_ready.go:81] duration metric: took 1.026346965s waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:32.194997 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218527 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:33.218555 2255048 pod_ready.go:81] duration metric: took 1.02355184s waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:33.218568 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:34.637330 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:36.638472 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:34.577030 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:37.076594 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:35.576857 2255048 pod_ready.go:102] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:38.072765 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.072791 2255048 pod_ready.go:81] duration metric: took 4.854217828s waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.072807 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080177 2255048 pod_ready.go:92] pod "kube-proxy-f748l" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.080219 2255048 pod_ready.go:81] duration metric: took 7.386736ms waiting for pod "kube-proxy-f748l" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.080234 2255048 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086910 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:09:38.086935 2255048 pod_ready.go:81] duration metric: took 6.692353ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:38.086947 2255048 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	I0911 12:09:39.139899 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.638556 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:39.076977 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:41.077356 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:40.275588 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:42.279343 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.140467 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.638950 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:43.575930 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.075946 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:44.773655 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:46.773783 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.639947 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.136953 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.076228 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:50.076280 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:52.575191 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:48.781871 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:51.276719 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.137841 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.639201 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:54.575724 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:56.577539 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:53.774303 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:55.775398 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:57.776172 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:58.137820 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.140032 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:09:59.075343 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:01.077352 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:00.274288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.281024 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:02.637659 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.638359 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:07.138194 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:03.576039 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:05.581746 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:04.774609 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:06.777649 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.638158 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:12.138452 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:08.086089 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:10.577034 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:09.274229 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:11.773772 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:14.637905 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.137141 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.075497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:15.075928 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:17.077025 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:13.777087 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:16.273244 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:18.274393 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.138225 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.638206 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:19.574944 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:21.577126 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:20.274987 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:22.774026 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:23.638427 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.639796 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:24.077660 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:26.576065 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:25.274996 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:27.773877 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.143807 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:30.639138 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:28.576550 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:31.076503 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:29.775191 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:32.275040 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.137429 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.137961 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:37.141067 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:33.575704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:35.576704 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:34.773882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:36.774534 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:39.637647 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.639902 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.076297 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:40.577008 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:38.774671 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:41.274312 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.274935 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:44.137187 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:46.141314 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:43.079758 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.589530 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:45.774930 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.273321 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.638868 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:51.139417 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:48.076212 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.078989 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.575259 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:50.274454 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:52.275086 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:53.637980 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:55.638403 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.575452 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:56.575714 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:54.777442 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:57.273658 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:58.136668 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:00.137799 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.077541 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.576462 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:10:59.275476 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:01.773680 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:02.636537 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.637865 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:07.136712 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:04.078863 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.577886 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:03.776995 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:06.274574 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:08.275266 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.137886 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.147508 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:09.075793 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:11.575828 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:10.275357 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:12.775241 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:13.638603 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.137986 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:14.076435 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:16.078427 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:15.275325 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:17.275446 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.138511 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.638477 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:18.575789 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:20.575987 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.576545 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:19.774865 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:22.280364 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:23.138801 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:25.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.577693 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:26.581497 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:24.774606 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.274878 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:27.639126 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.640834 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:32.138497 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.079788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.575364 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:29.774769 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:31.777925 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.636906 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.640855 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:33.576041 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:35.577513 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:34.275601 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:36.282120 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:39.138445 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:41.638724 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.074500 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.077237 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:42.078135 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:38.774882 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:40.776485 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.277653 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:43.639224 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.137265 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:44.574433 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:46.576378 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:45.776572 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.275210 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.137470 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.638249 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:48.580531 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:51.076018 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:50.775117 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.775535 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:52.641468 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.138561 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.138875 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:53.078788 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.079529 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.577003 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:55.274582 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:57.774611 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:11:59.637786 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:01.644407 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.075246 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.078022 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:00.274022 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:02.275711 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.137692 2255187 pod_ready.go:102] pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.614957 2255187 pod_ready.go:81] duration metric: took 4m0.000726123s waiting for pod "metrics-server-57f55c9bc5-ldjwn" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:04.614999 2255187 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:04.615020 2255187 pod_ready.go:38] duration metric: took 4m6.604014313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:04.615056 2255187 kubeadm.go:640] restartCluster took 4m25.597873734s
	W0911 12:12:04.615156 2255187 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:12:04.615268 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
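The interleaved entries above come from three concurrent test processes (PIDs 2255048, 2255187 and 2255814), each polling its own metrics-server pod for readiness; process 2255187 (the embed-certs-235462 profile) has just exhausted its 4m0s budget and falls back to a full kubeadm reset. Purely as an illustration (these commands are not run by the test, and they assume the usual minikube kubeconfig context named after the profile), the stuck pod could be inspected like this:

    # why does the metrics-server pod never report Ready? (pod name taken from the log above)
    kubectl --context embed-certs-235462 -n kube-system get pod metrics-server-57f55c9bc5-ldjwn -o wide
    kubectl --context embed-certs-235462 -n kube-system describe pod metrics-server-57f55c9bc5-ldjwn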
	I0911 12:12:04.576764 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:06.579533 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:04.779450 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:07.276202 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:08.580439 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.075465 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:09.277634 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:11.776920 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:13.076473 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:15.077335 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:17.574470 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:14.276806 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:16.774759 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:19.576080 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:22.078686 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:18.775173 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:21.274723 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:23.276576 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:24.082590 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:26.584485 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:25.277284 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:27.774953 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:29.079400 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:31.575879 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:30.278194 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:32.773872 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.434471 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.819147659s)
	I0911 12:12:37.434634 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:12:37.450370 2255187 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:12:37.463019 2255187 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:12:37.473313 2255187 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
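The failed "ls -la" above is minikube's stale-config check: after kubeadm reset the old kubeconfig files are gone, so exit status 2 simply means there is nothing left to clean up before kubeadm init re-creates them. A minimal sketch of the same check, simplified from the command in the log and not an exact reproduction of minikube's logic:

    # if any of the four kubeconfig files still existed, minikube would clean them up;
    # here they are all missing, so the check "fails" harmlessly and init proceeds next.
    sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf \
                /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf \
      || echo "no stale kubeconfig files - proceeding straight to kubeadm init"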
	I0911 12:12:37.473375 2255187 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:12:33.578208 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.076227 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:34.775135 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:36.775239 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:37.703004 2255187 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:12:38.574884 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:40.577027 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:38.779298 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:41.274039 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.076990 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.077566 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:47.576057 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:43.775208 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:45.775382 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:48.274401 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:49.022486 2255187 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:12:49.022566 2255187 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:12:49.022667 2255187 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:12:49.022825 2255187 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:12:49.022994 2255187 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:12:49.023081 2255187 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:12:49.025047 2255187 out.go:204]   - Generating certificates and keys ...
	I0911 12:12:49.025151 2255187 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:12:49.025249 2255187 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:12:49.025340 2255187 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:12:49.025428 2255187 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:12:49.025521 2255187 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:12:49.025599 2255187 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:12:49.025703 2255187 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:12:49.025801 2255187 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:12:49.025898 2255187 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:12:49.026021 2255187 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:12:49.026083 2255187 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:12:49.026163 2255187 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:12:49.026252 2255187 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:12:49.026338 2255187 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:12:49.026436 2255187 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:12:49.026518 2255187 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:12:49.026609 2255187 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:12:49.026694 2255187 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:12:49.028378 2255187 out.go:204]   - Booting up control plane ...
	I0911 12:12:49.028469 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:12:49.028538 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:12:49.028632 2255187 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:12:49.028759 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:12:49.028894 2255187 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:12:49.028960 2255187 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:12:49.029126 2255187 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:12:49.029225 2255187 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.504895 seconds
	I0911 12:12:49.029346 2255187 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:12:49.029485 2255187 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:12:49.029568 2255187 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:12:49.029801 2255187 kubeadm.go:322] [mark-control-plane] Marking the node embed-certs-235462 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:12:49.029864 2255187 kubeadm.go:322] [bootstrap-token] Using token: u1pjdn.ynd5x30gs2d5ngse
	I0911 12:12:49.031514 2255187 out.go:204]   - Configuring RBAC rules ...
	I0911 12:12:49.031635 2255187 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:12:49.031766 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:12:49.031961 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:12:49.032100 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:12:49.032234 2255187 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:12:49.032370 2255187 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:12:49.032513 2255187 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:12:49.032569 2255187 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:12:49.032641 2255187 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:12:49.032653 2255187 kubeadm.go:322] 
	I0911 12:12:49.032721 2255187 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:12:49.032733 2255187 kubeadm.go:322] 
	I0911 12:12:49.032850 2255187 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:12:49.032862 2255187 kubeadm.go:322] 
	I0911 12:12:49.032897 2255187 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:12:49.032954 2255187 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:12:49.033027 2255187 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:12:49.033034 2255187 kubeadm.go:322] 
	I0911 12:12:49.033113 2255187 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:12:49.033125 2255187 kubeadm.go:322] 
	I0911 12:12:49.033185 2255187 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:12:49.033194 2255187 kubeadm.go:322] 
	I0911 12:12:49.033272 2255187 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:12:49.033364 2255187 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:12:49.033478 2255187 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:12:49.033488 2255187 kubeadm.go:322] 
	I0911 12:12:49.033592 2255187 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:12:49.033674 2255187 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:12:49.033681 2255187 kubeadm.go:322] 
	I0911 12:12:49.033793 2255187 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.033940 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:12:49.033981 2255187 kubeadm.go:322] 	--control-plane 
	I0911 12:12:49.033994 2255187 kubeadm.go:322] 
	I0911 12:12:49.034117 2255187 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:12:49.034140 2255187 kubeadm.go:322] 
	I0911 12:12:49.034253 2255187 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token u1pjdn.ynd5x30gs2d5ngse \
	I0911 12:12:49.034398 2255187 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
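With kubeadm init finished, the fresh control plane can be spot-checked directly. The commands below are illustrative only; they reuse the kubectl binary and kubeconfig paths that appear elsewhere in this log and are not part of the test run:

    # confirm the apiserver answers and the node registered (illustration, not run by the test)
    sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get nodes
    sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig get pods -n kube-system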
	I0911 12:12:49.034424 2255187 cni.go:84] Creating CNI manager for ""
	I0911 12:12:49.034438 2255187 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:12:49.036358 2255187 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:12:49.037952 2255187 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:12:49.078613 2255187 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:12:49.171320 2255187 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:12:49.171458 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.171492 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=embed-certs-235462 minikube.k8s.io/updated_at=2023_09_11T12_12_49_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.227806 2255187 ops.go:34] apiserver oom_adj: -16
	I0911 12:12:49.533909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.637357 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.234909 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:50.734249 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.234928 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:51.734543 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:52.235022 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:49.576947 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.075970 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:50.275288 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.775973 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:52.734323 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.234558 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:53.734598 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.235197 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.734524 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.234539 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:55.734806 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.234833 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:56.734868 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:57.235336 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:54.574674 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:56.577723 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:54.777705 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.274282 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:57.735164 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.234340 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:58.734332 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.234884 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:12:59.734265 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.234310 2255187 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:13:00.376532 2255187 kubeadm.go:1081] duration metric: took 11.205145428s to wait for elevateKubeSystemPrivileges.
	I0911 12:13:00.376577 2255187 kubeadm.go:406] StartCluster complete in 5m21.403889838s
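The repeated "kubectl get sa default" calls above are how minikube waits for elevateKubeSystemPrivileges to become possible: it polls until the default ServiceAccount exists, which here took about 11.2s. A hedged sketch of that wait loop (the retry interval and exact policy inside minikube may differ):

    # poll until the controller-manager has created the "default" ServiceAccount
    until sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done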
	I0911 12:13:00.376632 2255187 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.376754 2255187 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:13:00.379195 2255187 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:13:00.379496 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:13:00.379604 2255187 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:13:00.379714 2255187 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-235462"
	I0911 12:13:00.379735 2255187 addons.go:231] Setting addon storage-provisioner=true in "embed-certs-235462"
	W0911 12:13:00.379744 2255187 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:13:00.379770 2255187 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:13:00.379813 2255187 addons.go:69] Setting default-storageclass=true in profile "embed-certs-235462"
	I0911 12:13:00.379829 2255187 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-235462"
	I0911 12:13:00.379872 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380021 2255187 addons.go:69] Setting metrics-server=true in profile "embed-certs-235462"
	I0911 12:13:00.380038 2255187 addons.go:231] Setting addon metrics-server=true in "embed-certs-235462"
	W0911 12:13:00.380053 2255187 addons.go:240] addon metrics-server should already be in state true
	I0911 12:13:00.380092 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.380276 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380299 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380314 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380338 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.380443 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.380464 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.400206 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36695
	I0911 12:13:00.400222 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41425
	I0911 12:13:00.400384 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35671
	I0911 12:13:00.400955 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400990 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.400957 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.401597 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401619 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.401749 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.401769 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402081 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402237 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.402249 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.402314 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402602 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.402785 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.402950 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402972 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.402986 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.403016 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.424319 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37805
	I0911 12:13:00.424352 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35791
	I0911 12:13:00.424991 2255187 addons.go:231] Setting addon default-storageclass=true in "embed-certs-235462"
	W0911 12:13:00.425015 2255187 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:13:00.425039 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425053 2255187 host.go:66] Checking if "embed-certs-235462" exists ...
	I0911 12:13:00.425387 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.425471 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.425496 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.425891 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.425904 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426206 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.426222 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.426644 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.426842 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.428151 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.429014 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.431494 2255187 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:13:00.429852 2255187 kapi.go:248] "coredns" deployment in "kube-system" namespace and "embed-certs-235462" context rescaled to 1 replicas
	I0911 12:13:00.430039 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.433081 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:13:00.433096 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:13:00.433121 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.433184 2255187 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.50.96 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:13:00.438048 2255187 out.go:177] * Verifying Kubernetes components...
	I0911 12:13:00.436324 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.437532 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.438207 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.442076 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:00.442211 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.442240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.443931 2255187 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:13:00.442451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.445563 2255187 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.445579 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.445583 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:13:00.445606 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.445674 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.449267 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46113
	I0911 12:13:00.449534 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.449823 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.450240 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.450270 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.450451 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.450818 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.450838 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.450906 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.451120 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.451298 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.452043 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.452652 2255187 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:13:00.452686 2255187 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:13:00.470652 2255187 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33093
	I0911 12:13:00.471240 2255187 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:13:00.471865 2255187 main.go:141] libmachine: Using API Version  1
	I0911 12:13:00.471888 2255187 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:13:00.472326 2255187 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:13:00.472745 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetState
	I0911 12:13:00.474485 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .DriverName
	I0911 12:13:00.475072 2255187 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.475093 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:13:00.475123 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHHostname
	I0911 12:13:00.478333 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478757 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2b:a0:6e", ip: ""} in network mk-embed-certs-235462: {Iface:virbr4 ExpiryTime:2023-09-11 12:59:07 +0000 UTC Type:0 Mac:52:54:00:2b:a0:6e Iaid: IPaddr:192.168.50.96 Prefix:24 Hostname:embed-certs-235462 Clientid:01:52:54:00:2b:a0:6e}
	I0911 12:13:00.478788 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | domain embed-certs-235462 has defined IP address 192.168.50.96 and MAC address 52:54:00:2b:a0:6e in network mk-embed-certs-235462
	I0911 12:13:00.478949 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHPort
	I0911 12:13:00.479157 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHKeyPath
	I0911 12:13:00.479301 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .GetSSHUsername
	I0911 12:13:00.479434 2255187 sshutil.go:53] new ssh client: &{IP:192.168.50.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/embed-certs-235462/id_rsa Username:docker}
	I0911 12:13:00.601913 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:13:00.601946 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:13:00.629483 2255187 node_ready.go:35] waiting up to 6m0s for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.629938 2255187 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:13:00.651067 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:13:00.653504 2255187 node_ready.go:49] node "embed-certs-235462" has status "Ready":"True"
	I0911 12:13:00.653549 2255187 node_ready.go:38] duration metric: took 24.023395ms waiting for node "embed-certs-235462" to be "Ready" ...
	I0911 12:13:00.653564 2255187 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:00.663033 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:13:00.663075 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:13:00.668515 2255187 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.709787 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:13:00.751534 2255187 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.751565 2255187 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:13:00.782859 2255187 pod_ready.go:92] pod "etcd-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.782894 2255187 pod_ready.go:81] duration metric: took 114.332855ms waiting for pod "etcd-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.782910 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.823512 2255187 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:13:00.891619 2255187 pod_ready.go:92] pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:00.891678 2255187 pod_ready.go:81] duration metric: took 108.758908ms waiting for pod "kube-apiserver-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:00.891695 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001447 2255187 pod_ready.go:92] pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.001483 2255187 pod_ready.go:81] duration metric: took 109.778603ms waiting for pod "kube-controller-manager-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.001501 2255187 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164166 2255187 pod_ready.go:92] pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace has status "Ready":"True"
	I0911 12:13:01.164205 2255187 pod_ready.go:81] duration metric: took 162.694687ms waiting for pod "kube-scheduler-embed-certs-235462" in "kube-system" namespace to be "Ready" ...
	I0911 12:13:01.164216 2255187 pod_ready.go:38] duration metric: took 510.637428ms for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:01.164239 2255187 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:13:01.164300 2255187 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:12:59.081781 2255814 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace has status "Ready":"False"
	I0911 12:12:59.267524 2255814 pod_ready.go:81] duration metric: took 4m0.000791617s waiting for pod "metrics-server-57f55c9bc5-tw6td" in "kube-system" namespace to be "Ready" ...
	E0911 12:12:59.267566 2255814 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:12:59.267580 2255814 pod_ready.go:38] duration metric: took 4m2.605912471s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:12:59.267603 2255814 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:12:59.267645 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:12:59.267855 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:12:59.332014 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:12:59.332042 2255814 cri.go:89] found id: ""
	I0911 12:12:59.332053 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:12:59.332135 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.338400 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:12:59.338493 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:12:59.373232 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:12:59.373284 2255814 cri.go:89] found id: ""
	I0911 12:12:59.373296 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:12:59.373371 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.379199 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:12:59.379288 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:12:59.415804 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:12:59.415840 2255814 cri.go:89] found id: ""
	I0911 12:12:59.415852 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:12:59.415940 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.422256 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:12:59.422343 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:12:59.462300 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:12:59.462327 2255814 cri.go:89] found id: ""
	I0911 12:12:59.462336 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:12:59.462392 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.467244 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:12:59.467364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:12:59.499594 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.499619 2255814 cri.go:89] found id: ""
	I0911 12:12:59.499627 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:12:59.499697 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.504481 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:12:59.504570 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:12:59.536588 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.536620 2255814 cri.go:89] found id: ""
	I0911 12:12:59.536631 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:12:59.536701 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.541454 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:12:59.541529 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:12:59.577953 2255814 cri.go:89] found id: ""
	I0911 12:12:59.577990 2255814 logs.go:284] 0 containers: []
	W0911 12:12:59.578001 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:12:59.578010 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:12:59.578084 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:12:59.616256 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.616283 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.616288 2255814 cri.go:89] found id: ""
	I0911 12:12:59.616296 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:12:59.616350 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.621818 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:12:59.627431 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:12:59.627462 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:12:59.690633 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:12:59.690681 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:12:59.733084 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:12:59.733133 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:12:59.775174 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:12:59.775220 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:12:59.829438 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:12:59.829492 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:12:59.894842 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:12:59.894888 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:12:59.936662 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:12:59.936703 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:12:59.955507 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:12:59.955544 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:00.127082 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:00.127129 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:00.178458 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:00.178501 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:00.226759 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:00.226805 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:00.267586 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:00.267637 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:00.311431 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:00.311465 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:12:59.276905 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:01.775061 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:02.733813 2255187 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.103819607s)
	I0911 12:13:02.733859 2255187 start.go:901] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0911 12:13:03.298110 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.646997747s)
	I0911 12:13:03.298169 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298179 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298209 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.588380755s)
	I0911 12:13:03.298256 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298278 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298545 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298566 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298577 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298586 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298596 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298611 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.298622 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.298834 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.298851 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.298891 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.298904 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299077 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299104 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.299117 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.299127 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.299083 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.299459 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.299474 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.485702 2255187 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (2.321356388s)
	I0911 12:13:03.485741 2255187 api_server.go:72] duration metric: took 3.052522714s to wait for apiserver process to appear ...
	I0911 12:13:03.485748 2255187 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.485768 2255187 api_server.go:253] Checking apiserver healthz at https://192.168.50.96:8443/healthz ...
	I0911 12:13:03.485987 2255187 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.66240811s)
	I0911 12:13:03.486070 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486090 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486553 2255187 main.go:141] libmachine: (embed-certs-235462) DBG | Closing plugin on server side
	I0911 12:13:03.486621 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486642 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486666 2255187 main.go:141] libmachine: Making call to close driver server
	I0911 12:13:03.486683 2255187 main.go:141] libmachine: (embed-certs-235462) Calling .Close
	I0911 12:13:03.486940 2255187 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:13:03.486956 2255187 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:13:03.486968 2255187 addons.go:467] Verifying addon metrics-server=true in "embed-certs-235462"
	I0911 12:13:03.489450 2255187 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0911 12:13:03.491514 2255187 addons.go:502] enable addons completed in 3.11190942s: enabled=[storage-provisioner default-storageclass metrics-server]
	I0911 12:13:03.571696 2255187 api_server.go:279] https://192.168.50.96:8443/healthz returned 200:
	ok
	I0911 12:13:03.576690 2255187 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:03.576730 2255187 api_server.go:131] duration metric: took 90.974437ms to wait for apiserver health ...
	I0911 12:13:03.576743 2255187 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:03.592687 2255187 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:03.592734 2255187 system_pods.go:61] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.592745 2255187 system_pods.go:61] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.592753 2255187 system_pods.go:61] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.592761 2255187 system_pods.go:61] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.592769 2255187 system_pods.go:61] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.592778 2255187 system_pods.go:61] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.592787 2255187 system_pods.go:61] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.592802 2255187 system_pods.go:61] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.592839 2255187 system_pods.go:74] duration metric: took 16.087864ms to wait for pod list to return data ...
	I0911 12:13:03.592855 2255187 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:03.606427 2255187 default_sa.go:45] found service account: "default"
	I0911 12:13:03.606517 2255187 default_sa.go:55] duration metric: took 13.6536ms for default service account to be created ...
	I0911 12:13:03.606542 2255187 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:03.622692 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.622752 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.622765 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.622777 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.622786 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.622801 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.622814 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.622980 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.623076 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.623157 2255187 retry.go:31] will retry after 240.25273ms: missing components: kube-dns, kube-proxy
	I0911 12:13:03.874980 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:03.875031 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:03.875041 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:03.875048 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:03.875081 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:03.875094 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:03.875104 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:03.875118 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:03.875130 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:03.875163 2255187 retry.go:31] will retry after 285.300702ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.171503 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.171548 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.171558 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.171566 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.171574 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.171580 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.171587 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.171598 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.171607 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.171632 2255187 retry.go:31] will retry after 386.395514ms: missing components: kube-dns, kube-proxy
	I0911 12:13:04.565931 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:04.565972 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:04.565982 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:04.565991 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:04.565998 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:04.566007 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:04.566015 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:04.566025 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:04.566039 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:04.566062 2255187 retry.go:31] will retry after 526.673ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.104101 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.104230 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0911 12:13:05.104257 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.104277 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.104294 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.104312 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0911 12:13:05.104336 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.104353 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.104363 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.104385 2255187 retry.go:31] will retry after 628.795734ms: missing components: kube-dns, kube-proxy
	I0911 12:13:05.745358 2255187 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:05.745392 2255187 system_pods.go:89] "coredns-5dd5756b68-hzq9f" [21c42924-879d-49a2-977d-4b8457984928] Running
	I0911 12:13:05.745400 2255187 system_pods.go:89] "etcd-embed-certs-235462" [5549ff6c-5b3e-4cda-ade5-bdfafd2ae79f] Running
	I0911 12:13:05.745408 2255187 system_pods.go:89] "kube-apiserver-embed-certs-235462" [4fe3c745-608e-4490-aed1-ad717874bd11] Running
	I0911 12:13:05.745416 2255187 system_pods.go:89] "kube-controller-manager-embed-certs-235462" [93f4d1e3-d168-47aa-828e-48a2da7e6376] Running
	I0911 12:13:05.745421 2255187 system_pods.go:89] "kube-proxy-zlcth" [5b02a945-710a-45aa-94b1-aab1f6f0f685] Running
	I0911 12:13:05.745427 2255187 system_pods.go:89] "kube-scheduler-embed-certs-235462" [239d1059-e718-4960-ac0b-9a3731f624bb] Running
	I0911 12:13:05.745440 2255187 system_pods.go:89] "metrics-server-57f55c9bc5-qbrf2" [086e38b9-c5da-4c0a-bed5-a97ffda47d36] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:05.745451 2255187 system_pods.go:89] "storage-provisioner" [1930e88f-3cd5-4235-aefa-106e5d92fcab] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0911 12:13:05.745463 2255187 system_pods.go:126] duration metric: took 2.138903103s to wait for k8s-apps to be running ...
	I0911 12:13:05.745480 2255187 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:05.745540 2255187 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:05.762725 2255187 system_svc.go:56] duration metric: took 17.229678ms WaitForService to wait for kubelet.
	I0911 12:13:05.762766 2255187 kubeadm.go:581] duration metric: took 5.329544538s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:05.762793 2255187 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:05.767056 2255187 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:05.767087 2255187 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:05.767112 2255187 node_conditions.go:105] duration metric: took 4.314286ms to run NodePressure ...
	I0911 12:13:05.767131 2255187 start.go:228] waiting for startup goroutines ...
	I0911 12:13:05.767138 2255187 start.go:233] waiting for cluster config update ...
	I0911 12:13:05.767147 2255187 start.go:242] writing updated cluster config ...
	I0911 12:13:05.767462 2255187 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:05.823796 2255187 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:05.826336 2255187 out.go:177] * Done! kubectl is now configured to use "embed-certs-235462" cluster and "default" namespace by default
	I0911 12:13:03.450576 2255814 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:13:03.472433 2255814 api_server.go:72] duration metric: took 4m14.685379298s to wait for apiserver process to appear ...
	I0911 12:13:03.472469 2255814 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:13:03.472520 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:03.472614 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:03.515433 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:03.515471 2255814 cri.go:89] found id: ""
	I0911 12:13:03.515483 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:03.515560 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.521654 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:03.521745 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:03.569379 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:03.569406 2255814 cri.go:89] found id: ""
	I0911 12:13:03.569416 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:03.569481 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.574638 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:03.574723 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:03.610693 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.610722 2255814 cri.go:89] found id: ""
	I0911 12:13:03.610733 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:03.610794 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.615774 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:03.615894 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:03.657087 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:03.657117 2255814 cri.go:89] found id: ""
	I0911 12:13:03.657129 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:03.657211 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.662224 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:03.662315 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:03.698282 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.698359 2255814 cri.go:89] found id: ""
	I0911 12:13:03.698381 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:03.698466 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.704160 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:03.704246 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:03.748122 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.748152 2255814 cri.go:89] found id: ""
	I0911 12:13:03.748162 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:03.748238 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.752657 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:03.752742 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:03.786815 2255814 cri.go:89] found id: ""
	I0911 12:13:03.786853 2255814 logs.go:284] 0 containers: []
	W0911 12:13:03.786863 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:03.786871 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:03.786942 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:03.824384 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.824409 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:03.824414 2255814 cri.go:89] found id: ""
	I0911 12:13:03.824421 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:03.824497 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.830317 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:03.836320 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:03.836355 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:03.887480 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:03.887524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:03.930466 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:03.930507 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:03.966522 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:03.966563 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:04.026111 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:04.026168 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:04.045422 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:04.045468 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:04.185127 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:04.185179 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:04.235047 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:04.235089 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:04.856084 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:04.856134 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:04.903388 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:04.903433 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:04.964861 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:04.964916 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:05.007565 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:05.007605 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:05.069630 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:05.069676 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.608676 2255814 api_server.go:253] Checking apiserver healthz at https://192.168.39.230:8444/healthz ...
	I0911 12:13:07.615388 2255814 api_server.go:279] https://192.168.39.230:8444/healthz returned 200:
	ok
	I0911 12:13:07.617076 2255814 api_server.go:141] control plane version: v1.28.1
	I0911 12:13:07.617101 2255814 api_server.go:131] duration metric: took 4.14462443s to wait for apiserver health ...
	I0911 12:13:07.617110 2255814 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:13:07.617138 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0911 12:13:07.617196 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0911 12:13:07.656726 2255814 cri.go:89] found id: "07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:07.656750 2255814 cri.go:89] found id: ""
	I0911 12:13:07.656760 2255814 logs.go:284] 1 containers: [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45]
	I0911 12:13:07.656850 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.661277 2255814 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0911 12:13:07.661364 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0911 12:13:07.697717 2255814 cri.go:89] found id: "153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:07.697746 2255814 cri.go:89] found id: ""
	I0911 12:13:07.697754 2255814 logs.go:284] 1 containers: [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7]
	I0911 12:13:07.697842 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.703800 2255814 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0911 12:13:07.703888 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0911 12:13:07.747003 2255814 cri.go:89] found id: "8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:07.747033 2255814 cri.go:89] found id: ""
	I0911 12:13:07.747043 2255814 logs.go:284] 1 containers: [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27]
	I0911 12:13:07.747122 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.751932 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0911 12:13:07.752007 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0911 12:13:07.785348 2255814 cri.go:89] found id: "fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:07.785375 2255814 cri.go:89] found id: ""
	I0911 12:13:07.785386 2255814 logs.go:284] 1 containers: [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6]
	I0911 12:13:07.785460 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.790170 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0911 12:13:07.790237 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0911 12:13:07.827467 2255814 cri.go:89] found id: "08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:07.827496 2255814 cri.go:89] found id: ""
	I0911 12:13:07.827510 2255814 logs.go:284] 1 containers: [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124]
	I0911 12:13:07.827583 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.834478 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0911 12:13:07.834552 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0911 12:13:07.873739 2255814 cri.go:89] found id: "169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:07.873766 2255814 cri.go:89] found id: ""
	I0911 12:13:07.873774 2255814 logs.go:284] 1 containers: [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6]
	I0911 12:13:07.873828 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.878424 2255814 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0911 12:13:07.878528 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0911 12:13:07.916665 2255814 cri.go:89] found id: ""
	I0911 12:13:07.916696 2255814 logs.go:284] 0 containers: []
	W0911 12:13:07.916708 2255814 logs.go:286] No container was found matching "kindnet"
	I0911 12:13:07.916716 2255814 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0911 12:13:07.916780 2255814 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0911 12:13:07.950146 2255814 cri.go:89] found id: "8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:07.950172 2255814 cri.go:89] found id: "f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:07.950177 2255814 cri.go:89] found id: ""
	I0911 12:13:07.950185 2255814 logs.go:284] 2 containers: [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329]
	I0911 12:13:07.950256 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.954996 2255814 ssh_runner.go:195] Run: which crictl
	I0911 12:13:07.959157 2255814 logs.go:123] Gathering logs for kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] ...
	I0911 12:13:07.959189 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6"
	I0911 12:13:08.027081 2255814 logs.go:123] Gathering logs for kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] ...
	I0911 12:13:08.027112 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6"
	I0911 12:13:03.775843 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:06.274500 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:08.079481 2255814 logs.go:123] Gathering logs for storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] ...
	I0911 12:13:08.079522 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed"
	I0911 12:13:08.118655 2255814 logs.go:123] Gathering logs for kubelet ...
	I0911 12:13:08.118696 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0911 12:13:08.177644 2255814 logs.go:123] Gathering logs for dmesg ...
	I0911 12:13:08.177690 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0911 12:13:08.192495 2255814 logs.go:123] Gathering logs for describe nodes ...
	I0911 12:13:08.192524 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0911 12:13:08.338344 2255814 logs.go:123] Gathering logs for etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] ...
	I0911 12:13:08.338388 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7"
	I0911 12:13:08.385409 2255814 logs.go:123] Gathering logs for coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] ...
	I0911 12:13:08.385454 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27"
	I0911 12:13:08.420999 2255814 logs.go:123] Gathering logs for storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] ...
	I0911 12:13:08.421033 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329"
	I0911 12:13:08.457183 2255814 logs.go:123] Gathering logs for container status ...
	I0911 12:13:08.457223 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0911 12:13:08.500499 2255814 logs.go:123] Gathering logs for kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] ...
	I0911 12:13:08.500531 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45"
	I0911 12:13:08.550546 2255814 logs.go:123] Gathering logs for kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] ...
	I0911 12:13:08.550587 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124"
	I0911 12:13:08.584802 2255814 logs.go:123] Gathering logs for CRI-O ...
	I0911 12:13:08.584854 2255814 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0911 12:13:11.626627 2255814 system_pods.go:59] 8 kube-system pods found
	I0911 12:13:11.626661 2255814 system_pods.go:61] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.626666 2255814 system_pods.go:61] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.626670 2255814 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.626675 2255814 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.626679 2255814 system_pods.go:61] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.626683 2255814 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.626690 2255814 system_pods.go:61] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.626696 2255814 system_pods.go:61] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.626702 2255814 system_pods.go:74] duration metric: took 4.009586477s to wait for pod list to return data ...
	I0911 12:13:11.626710 2255814 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:13:11.630703 2255814 default_sa.go:45] found service account: "default"
	I0911 12:13:11.630735 2255814 default_sa.go:55] duration metric: took 4.019315ms for default service account to be created ...
	I0911 12:13:11.630747 2255814 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:13:11.637643 2255814 system_pods.go:86] 8 kube-system pods found
	I0911 12:13:11.637681 2255814 system_pods.go:89] "coredns-5dd5756b68-xszs4" [e58151f1-7503-49df-b847-67ac70d0ef74] Running
	I0911 12:13:11.637687 2255814 system_pods.go:89] "etcd-default-k8s-diff-port-484027" [bdd25816-919a-403d-9291-74b15f755c12] Running
	I0911 12:13:11.637693 2255814 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-484027" [f6461bc2-51bd-49e0-9e86-b6b9ab4f742c] Running
	I0911 12:13:11.637697 2255814 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-484027" [d99aa52a-843d-46b4-a106-ac174ef6a39f] Running
	I0911 12:13:11.637701 2255814 system_pods.go:89] "kube-proxy-ldgjr" [34e5049f-8cba-49bf-96af-f5e0338e4aa5] Running
	I0911 12:13:11.637706 2255814 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-484027" [ccf01bc8-36e3-4f03-855e-06cea0b81d80] Running
	I0911 12:13:11.637713 2255814 system_pods.go:89] "metrics-server-57f55c9bc5-tw6td" [37d0a828-9243-4359-be39-1c2099835e45] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:13:11.637720 2255814 system_pods.go:89] "storage-provisioner" [deb073a7-107f-419d-9b5e-16c7722b957d] Running
	I0911 12:13:11.637727 2255814 system_pods.go:126] duration metric: took 6.974046ms to wait for k8s-apps to be running ...
	I0911 12:13:11.637734 2255814 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:13:11.637781 2255814 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:13:11.656267 2255814 system_svc.go:56] duration metric: took 18.513073ms WaitForService to wait for kubelet.
	I0911 12:13:11.656313 2255814 kubeadm.go:581] duration metric: took 4m22.869270451s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:13:11.656342 2255814 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:13:11.660206 2255814 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:13:11.660242 2255814 node_conditions.go:123] node cpu capacity is 2
	I0911 12:13:11.660256 2255814 node_conditions.go:105] duration metric: took 3.907675ms to run NodePressure ...
	I0911 12:13:11.660271 2255814 start.go:228] waiting for startup goroutines ...
	I0911 12:13:11.660281 2255814 start.go:233] waiting for cluster config update ...
	I0911 12:13:11.660295 2255814 start.go:242] writing updated cluster config ...
	I0911 12:13:11.660673 2255814 ssh_runner.go:195] Run: rm -f paused
	I0911 12:13:11.716963 2255814 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:13:11.719502 2255814 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-484027" cluster and "default" namespace by default
	I0911 12:13:08.774412 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:10.776103 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:13.273773 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:15.274785 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:17.776143 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:20.274491 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:22.276115 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:24.776008 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:26.776415 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:29.274644 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:31.774477 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:33.774923 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:35.776441 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:37.777677 2255048 pod_ready.go:102] pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace has status "Ready":"False"
	I0911 12:13:38.087732 2255048 pod_ready.go:81] duration metric: took 4m0.000743055s waiting for pod "metrics-server-57f55c9bc5-tvrkk" in "kube-system" namespace to be "Ready" ...
	E0911 12:13:38.087774 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0911 12:13:38.087805 2255048 pod_ready.go:38] duration metric: took 4m11.950533095s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:13:38.087877 2255048 kubeadm.go:640] restartCluster took 4m32.29342443s
	W0911 12:13:38.087958 2255048 out.go:239] ! Unable to restart cluster, will reset it: extra: timed out waiting 4m0s for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready"
	I0911 12:13:38.088001 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0911 12:14:10.169576 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.081486969s)
	I0911 12:14:10.169706 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:10.189300 2255048 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:14:10.202385 2255048 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:14:10.213749 2255048 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
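The stale-config check above reduces to "do the kubeadm-managed kubeconfigs already exist?": ls exits with status 2 because none of the four files are present, so there is nothing stale to clean up and kubeadm init runs directly. A rough Go sketch of that decision, assuming nothing beyond os/exec (staleConfigsPresent is an illustrative name, not minikube's):

package main

import (
	"fmt"
	"os/exec"
)

// kubeconfigs are the files kubeadm writes; if ls cannot find any of them the
// node is effectively fresh and stale-config cleanup can be skipped.
var kubeconfigs = []string{
	"/etc/kubernetes/admin.conf",
	"/etc/kubernetes/kubelet.conf",
	"/etc/kubernetes/controller-manager.conf",
	"/etc/kubernetes/scheduler.conf",
}

func staleConfigsPresent() bool {
	args := append([]string{"ls", "-la"}, kubeconfigs...)
	// A non-zero exit (here status 2: No such file or directory) means nothing stale.
	return exec.Command("sudo", args...).Run() == nil
}

func main() {
	if staleConfigsPresent() {
		fmt.Println("existing kubeconfigs found: clean up before kubeadm init")
	} else {
		fmt.Println("config check failed, skipping stale config cleanup")
	}
}
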
	I0911 12:14:10.213816 2255048 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:14:10.279484 2255048 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:14:10.279634 2255048 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:14:10.462302 2255048 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:14:10.462488 2255048 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:14:10.462634 2255048 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:14:10.659475 2255048 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:14:10.661923 2255048 out.go:204]   - Generating certificates and keys ...
	I0911 12:14:10.662086 2255048 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:14:10.662142 2255048 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:14:10.662223 2255048 kubeadm.go:322] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0911 12:14:10.662303 2255048 kubeadm.go:322] [certs] Using existing front-proxy-ca certificate authority
	I0911 12:14:10.663973 2255048 kubeadm.go:322] [certs] Using existing front-proxy-client certificate and key on disk
	I0911 12:14:10.665836 2255048 kubeadm.go:322] [certs] Using existing etcd/ca certificate authority
	I0911 12:14:10.667292 2255048 kubeadm.go:322] [certs] Using existing etcd/server certificate and key on disk
	I0911 12:14:10.668584 2255048 kubeadm.go:322] [certs] Using existing etcd/peer certificate and key on disk
	I0911 12:14:10.669931 2255048 kubeadm.go:322] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0911 12:14:10.670570 2255048 kubeadm.go:322] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0911 12:14:10.671008 2255048 kubeadm.go:322] [certs] Using the existing "sa" key
	I0911 12:14:10.671087 2255048 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:14:10.865541 2255048 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:14:11.063586 2255048 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:14:11.341833 2255048 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:14:11.573561 2255048 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:14:11.574128 2255048 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:14:11.577101 2255048 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:14:11.579311 2255048 out.go:204]   - Booting up control plane ...
	I0911 12:14:11.579427 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:14:11.579550 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:14:11.579644 2255048 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:14:11.598440 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:14:11.599446 2255048 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:14:11.599531 2255048 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:14:11.738771 2255048 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:14:21.243059 2255048 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503809 seconds
	I0911 12:14:21.243215 2255048 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:14:21.262148 2255048 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:14:21.802567 2255048 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:14:21.802822 2255048 kubeadm.go:322] [mark-control-plane] Marking the node no-preload-352076 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:14:22.320035 2255048 kubeadm.go:322] [bootstrap-token] Using token: 3xtym4.6ytyj76o1n15fsq8
	I0911 12:14:22.321759 2255048 out.go:204]   - Configuring RBAC rules ...
	I0911 12:14:22.321922 2255048 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:14:22.329851 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:14:22.344882 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:14:22.349640 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:14:22.354357 2255048 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:14:22.359463 2255048 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:14:22.380068 2255048 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:14:22.713378 2255048 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:14:22.780207 2255048 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:14:22.780252 2255048 kubeadm.go:322] 
	I0911 12:14:22.780331 2255048 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:14:22.780344 2255048 kubeadm.go:322] 
	I0911 12:14:22.780441 2255048 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:14:22.780450 2255048 kubeadm.go:322] 
	I0911 12:14:22.780489 2255048 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:14:22.780568 2255048 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:14:22.780648 2255048 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:14:22.780657 2255048 kubeadm.go:322] 
	I0911 12:14:22.780757 2255048 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:14:22.780791 2255048 kubeadm.go:322] 
	I0911 12:14:22.780876 2255048 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:14:22.780895 2255048 kubeadm.go:322] 
	I0911 12:14:22.780958 2255048 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:14:22.781054 2255048 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:14:22.781157 2255048 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:14:22.781168 2255048 kubeadm.go:322] 
	I0911 12:14:22.781264 2255048 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:14:22.781363 2255048 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:14:22.781374 2255048 kubeadm.go:322] 
	I0911 12:14:22.781490 2255048 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.781618 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:14:22.781684 2255048 kubeadm.go:322] 	--control-plane 
	I0911 12:14:22.781695 2255048 kubeadm.go:322] 
	I0911 12:14:22.781813 2255048 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:14:22.781830 2255048 kubeadm.go:322] 
	I0911 12:14:22.781956 2255048 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 3xtym4.6ytyj76o1n15fsq8 \
	I0911 12:14:22.782107 2255048 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:14:22.783393 2255048 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:14:22.783423 2255048 cni.go:84] Creating CNI manager for ""
	I0911 12:14:22.783434 2255048 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:14:22.785623 2255048 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:14:22.787278 2255048 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:14:22.817914 2255048 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:14:22.857165 2255048 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:14:22.857266 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:22.857282 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=no-preload-352076 minikube.k8s.io/updated_at=2023_09_11T12_14_22_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.375677 2255048 ops.go:34] apiserver oom_adj: -16
	I0911 12:14:23.375731 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:23.497980 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.128149 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:24.627110 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.127658 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:25.627595 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.127143 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:26.627803 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.128061 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:27.627169 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.128081 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:28.628055 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.127187 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:29.627707 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.127233 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:30.627943 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.127222 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:31.627921 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.127760 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:32.628112 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.128107 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:33.627835 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.127171 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:34.627113 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.127499 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:35.627255 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.127199 2255048 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:14:36.314187 2255048 kubeadm.go:1081] duration metric: took 13.456994708s to wait for elevateKubeSystemPrivileges.
	I0911 12:14:36.314241 2255048 kubeadm.go:406] StartCluster complete in 5m30.569752421s
	I0911 12:14:36.314272 2255048 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.314446 2255048 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:14:36.317402 2255048 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:14:36.317739 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:14:36.318031 2255048 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:14:36.317936 2255048 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:14:36.318110 2255048 addons.go:69] Setting storage-provisioner=true in profile "no-preload-352076"
	I0911 12:14:36.318135 2255048 addons.go:231] Setting addon storage-provisioner=true in "no-preload-352076"
	I0911 12:14:36.318137 2255048 addons.go:69] Setting default-storageclass=true in profile "no-preload-352076"
	I0911 12:14:36.318148 2255048 addons.go:69] Setting metrics-server=true in profile "no-preload-352076"
	I0911 12:14:36.318163 2255048 addons.go:231] Setting addon metrics-server=true in "no-preload-352076"
	I0911 12:14:36.318164 2255048 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-352076"
	W0911 12:14:36.318169 2255048 addons.go:240] addon metrics-server should already be in state true
	I0911 12:14:36.318218 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	W0911 12:14:36.318143 2255048 addons.go:240] addon storage-provisioner should already be in state true
	I0911 12:14:36.318318 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.318696 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318710 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318720 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.318723 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318738 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.318741 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.337905 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41655
	I0911 12:14:36.338002 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36991
	I0911 12:14:36.338589 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.338678 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.339313 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339317 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.339340 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339363 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.339435 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39609
	I0911 12:14:36.339903 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339909 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.339981 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.340160 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.340463 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.340496 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.340588 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.340617 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.341051 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.341512 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.341540 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.359712 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44673
	I0911 12:14:36.360342 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.360914 2255048 addons.go:231] Setting addon default-storageclass=true in "no-preload-352076"
	W0911 12:14:36.360941 2255048 addons.go:240] addon default-storageclass should already be in state true
	I0911 12:14:36.360969 2255048 host.go:66] Checking if "no-preload-352076" exists ...
	I0911 12:14:36.360969 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.360984 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.361238 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.361271 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.361350 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.361540 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.362624 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36153
	I0911 12:14:36.363381 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.363731 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.364093 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.364114 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.366385 2255048 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:14:36.364716 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.368526 2255048 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.368557 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:14:36.368640 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.368799 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.371211 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.374123 2255048 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0911 12:14:36.373727 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.374507 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.376914 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.376951 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.376846 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0911 12:14:36.376970 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0911 12:14:36.376991 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.377194 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.377424 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.377656 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.380757 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381482 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.381508 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.381537 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.381783 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.381953 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.382098 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.383003 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42733
	I0911 12:14:36.383415 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.383860 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.383884 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.384174 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.384600 2255048 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:14:36.384650 2255048 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:14:36.401421 2255048 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38633
	I0911 12:14:36.401987 2255048 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:14:36.402660 2255048 main.go:141] libmachine: Using API Version  1
	I0911 12:14:36.402684 2255048 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:14:36.403172 2255048 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:14:36.403456 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetState
	I0911 12:14:36.406003 2255048 main.go:141] libmachine: (no-preload-352076) Calling .DriverName
	I0911 12:14:36.406531 2255048 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.406567 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:14:36.406593 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHHostname
	I0911 12:14:36.410520 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411016 2255048 main.go:141] libmachine: (no-preload-352076) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:91:89:e0", ip: ""} in network mk-no-preload-352076: {Iface:virbr2 ExpiryTime:2023-09-11 12:58:42 +0000 UTC Type:0 Mac:52:54:00:91:89:e0 Iaid: IPaddr:192.168.72.157 Prefix:24 Hostname:no-preload-352076 Clientid:01:52:54:00:91:89:e0}
	I0911 12:14:36.411072 2255048 main.go:141] libmachine: (no-preload-352076) DBG | domain no-preload-352076 has defined IP address 192.168.72.157 and MAC address 52:54:00:91:89:e0 in network mk-no-preload-352076
	I0911 12:14:36.411331 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHPort
	I0911 12:14:36.411517 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHKeyPath
	I0911 12:14:36.411723 2255048 main.go:141] libmachine: (no-preload-352076) Calling .GetSSHUsername
	I0911 12:14:36.411895 2255048 sshutil.go:53] new ssh client: &{IP:192.168.72.157 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/no-preload-352076/id_rsa Username:docker}
	I0911 12:14:36.448234 2255048 kapi.go:248] "coredns" deployment in "kube-system" namespace and "no-preload-352076" context rescaled to 1 replicas
	I0911 12:14:36.448281 2255048 start.go:223] Will wait 6m0s for node &{Name: IP:192.168.72.157 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:14:36.450615 2255048 out.go:177] * Verifying Kubernetes components...
	I0911 12:14:36.452566 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:36.600188 2255048 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:14:36.600187 2255048 node_ready.go:35] waiting up to 6m0s for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611125 2255048 node_ready.go:49] node "no-preload-352076" has status "Ready":"True"
	I0911 12:14:36.611167 2255048 node_ready.go:38] duration metric: took 10.942009ms waiting for node "no-preload-352076" to be "Ready" ...
	I0911 12:14:36.611181 2255048 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:14:36.632729 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0911 12:14:36.632759 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0911 12:14:36.640639 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:36.656421 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:14:36.659146 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:14:36.711603 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0911 12:14:36.711644 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0911 12:14:36.780574 2255048 addons.go:423] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:36.780614 2255048 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0911 12:14:36.874964 2255048 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.969647165s)
	I0911 12:14:38.569949 2255048 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
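The CoreDNS patch above pipes the coredns ConfigMap through sed to splice a hosts block (resolving host.minikube.internal to the gateway IP) in front of the forward plugin, then replaces the ConfigMap. The same edit could be made with client-go; the sketch below is an assumption-laden illustration, not minikube's implementation, and hard-codes the kubeconfig path and gateway IP seen in this run:

package main

import (
	"context"
	"fmt"
	"strings"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	cm, err := client.CoreV1().ConfigMaps("kube-system").Get(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	// Insert a hosts block ahead of the forward plugin, mirroring the sed edit in the log.
	hosts := "        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }\n"
	cm.Data["Corefile"] = strings.Replace(cm.Data["Corefile"],
		"        forward . /etc/resolv.conf", hosts+"        forward . /etc/resolv.conf", 1)
	if _, err := client.CoreV1().ConfigMaps("kube-system").Update(ctx, cm, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("host.minikube.internal record injected into CoreDNS ConfigMap")
}
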
	I0911 12:14:38.569895 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.91343277s)
	I0911 12:14:38.570001 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570017 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570428 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570469 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570484 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570440 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570495 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.570786 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.570801 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.570803 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.570820 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:38.570830 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:38.571133 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:38.571183 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:38.571196 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:38.756212 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:39.258501 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.599303563s)
	I0911 12:14:39.258567 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258581 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.258631 2255048 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.383622497s)
	I0911 12:14:39.258693 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.258713 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259000 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259069 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259129 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259139 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259040 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259150 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259154 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259165 2255048 main.go:141] libmachine: Making call to close driver server
	I0911 12:14:39.259178 2255048 main.go:141] libmachine: (no-preload-352076) Calling .Close
	I0911 12:14:39.259042 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: (no-preload-352076) DBG | Closing plugin on server side
	I0911 12:14:39.259444 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259468 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259514 2255048 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:14:39.259605 2255048 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:14:39.259620 2255048 addons.go:467] Verifying addon metrics-server=true in "no-preload-352076"
	I0911 12:14:39.261573 2255048 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0911 12:14:39.263513 2255048 addons.go:502] enable addons completed in 2.945573816s: enabled=[default-storageclass storage-provisioner metrics-server]
	I0911 12:14:41.194698 2255048 pod_ready.go:102] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"False"
	I0911 12:14:41.682872 2255048 pod_ready.go:92] pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.682904 2255048 pod_ready.go:81] duration metric: took 5.042231142s waiting for pod "coredns-5dd5756b68-6w2w7" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.682919 2255048 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.685265 2255048 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685295 2255048 pod_ready.go:81] duration metric: took 2.370305ms waiting for pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace to be "Ready" ...
	E0911 12:14:41.685306 2255048 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ddttk" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ddttk" not found
	I0911 12:14:41.685313 2255048 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694255 2255048 pod_ready.go:92] pod "etcd-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.694295 2255048 pod_ready.go:81] duration metric: took 8.974837ms waiting for pod "etcd-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.694309 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700807 2255048 pod_ready.go:92] pod "kube-apiserver-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.700854 2255048 pod_ready.go:81] duration metric: took 6.536644ms waiting for pod "kube-apiserver-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.700869 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707895 2255048 pod_ready.go:92] pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.707918 2255048 pod_ready.go:81] duration metric: took 7.041207ms waiting for pod "kube-controller-manager-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.707930 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880293 2255048 pod_ready.go:92] pod "kube-proxy-f5w2x" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:41.880329 2255048 pod_ready.go:81] duration metric: took 172.39121ms waiting for pod "kube-proxy-f5w2x" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:41.880345 2255048 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280038 2255048 pod_ready.go:92] pod "kube-scheduler-no-preload-352076" in "kube-system" namespace has status "Ready":"True"
	I0911 12:14:42.280066 2255048 pod_ready.go:81] duration metric: took 399.713688ms waiting for pod "kube-scheduler-no-preload-352076" in "kube-system" namespace to be "Ready" ...
	I0911 12:14:42.280074 2255048 pod_ready.go:38] duration metric: took 5.668879257s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
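The "extra wait" that just completed polls every pod matching the listed control-plane label selectors until each reports the Ready condition or the budget runs out. A compact client-go sketch of that loop, with the selectors and the 6m0s budget taken from the log and error handling trimmed; this is an illustration of the pattern, not minikube's actual pod_ready code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

var selectors = []string{
	"k8s-app=kube-dns", "component=etcd", "component=kube-apiserver",
	"component=kube-controller-manager", "k8s-app=kube-proxy", "component=kube-scheduler",
}

// podReady reports whether the pod's Ready condition is True.
func podReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()

	for _, sel := range selectors {
		for {
			pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{LabelSelector: sel})
			if err == nil && len(pods.Items) > 0 {
				allReady := true
				for i := range pods.Items {
					if !podReady(&pods.Items[i]) {
						allReady = false
					}
				}
				if allReady {
					break
				}
			}
			select {
			case <-ctx.Done():
				fmt.Println("timed out waiting for", sel)
				return
			case <-time.After(2 * time.Second):
			}
		}
		fmt.Println("pods for", sel, "are Ready")
	}
}
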
	I0911 12:14:42.280093 2255048 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:14:42.280143 2255048 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:14:42.303868 2255048 api_server.go:72] duration metric: took 5.855535753s to wait for apiserver process to appear ...
	I0911 12:14:42.303906 2255048 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:14:42.303927 2255048 api_server.go:253] Checking apiserver healthz at https://192.168.72.157:8443/healthz ...
	I0911 12:14:42.310890 2255048 api_server.go:279] https://192.168.72.157:8443/healthz returned 200:
	ok
	I0911 12:14:42.313428 2255048 api_server.go:141] control plane version: v1.28.1
	I0911 12:14:42.313455 2255048 api_server.go:131] duration metric: took 9.541682ms to wait for apiserver health ...
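The healthz wait above is a plain GET against https://192.168.72.157:8443/healthz that succeeds once the endpoint returns 200 "ok". A small Go sketch of such a probe; TLS verification is skipped here purely for brevity, whereas a production client would trust the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Short timeout so a not-yet-listening apiserver fails fast and can be retried.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://192.168.72.157:8443/healthz"
	resp, err := client.Get(url)
	if err != nil {
		fmt.Println("apiserver not healthy yet:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
}
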
	I0911 12:14:42.313464 2255048 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:14:42.483863 2255048 system_pods.go:59] 8 kube-system pods found
	I0911 12:14:42.483895 2255048 system_pods.go:61] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.483900 2255048 system_pods.go:61] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.483905 2255048 system_pods.go:61] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.483909 2255048 system_pods.go:61] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.483912 2255048 system_pods.go:61] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.483916 2255048 system_pods.go:61] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.483923 2255048 system_pods.go:61] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.483930 2255048 system_pods.go:61] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.483936 2255048 system_pods.go:74] duration metric: took 170.467243ms to wait for pod list to return data ...
	I0911 12:14:42.483945 2255048 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:14:42.679235 2255048 default_sa.go:45] found service account: "default"
	I0911 12:14:42.679270 2255048 default_sa.go:55] duration metric: took 195.319105ms for default service account to be created ...
	I0911 12:14:42.679284 2255048 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:14:42.883048 2255048 system_pods.go:86] 8 kube-system pods found
	I0911 12:14:42.883078 2255048 system_pods.go:89] "coredns-5dd5756b68-6w2w7" [fe585a8f-a92f-4497-b399-d759c995f9e6] Running
	I0911 12:14:42.883084 2255048 system_pods.go:89] "etcd-no-preload-352076" [46c6483c-1301-4aa4-9851-b66116ac5b8a] Running
	I0911 12:14:42.883089 2255048 system_pods.go:89] "kube-apiserver-no-preload-352076" [a01ff260-818b-46a3-99c2-c5e4f8fa2610] Running
	I0911 12:14:42.883093 2255048 system_pods.go:89] "kube-controller-manager-no-preload-352076" [0f697ec3-cabf-41f8-bdb0-8b39a70deafd] Running
	I0911 12:14:42.883097 2255048 system_pods.go:89] "kube-proxy-f5w2x" [03e8a2b5-aaf8-4fd7-920e-033a44729398] Running
	I0911 12:14:42.883103 2255048 system_pods.go:89] "kube-scheduler-no-preload-352076" [ddbb31d1-9bc6-4466-bd7b-81f433959677] Running
	I0911 12:14:42.883110 2255048 system_pods.go:89] "metrics-server-57f55c9bc5-r8mgg" [a54edaa0-b800-48f3-99bc-7d38adb834d0] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0911 12:14:42.883118 2255048 system_pods.go:89] "storage-provisioner" [c5d1acfb-fa11-4a73-9176-21aee3e2ab99] Running
	I0911 12:14:42.883126 2255048 system_pods.go:126] duration metric: took 203.835523ms to wait for k8s-apps to be running ...
	I0911 12:14:42.883133 2255048 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:14:42.883181 2255048 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:14:42.897962 2255048 system_svc.go:56] duration metric: took 14.812893ms WaitForService to wait for kubelet.
	I0911 12:14:42.898000 2255048 kubeadm.go:581] duration metric: took 6.449678905s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:14:42.898022 2255048 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:14:43.080859 2255048 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:14:43.080890 2255048 node_conditions.go:123] node cpu capacity is 2
	I0911 12:14:43.080901 2255048 node_conditions.go:105] duration metric: took 182.874167ms to run NodePressure ...
	I0911 12:14:43.080913 2255048 start.go:228] waiting for startup goroutines ...
	I0911 12:14:43.080919 2255048 start.go:233] waiting for cluster config update ...
	I0911 12:14:43.080930 2255048 start.go:242] writing updated cluster config ...
	I0911 12:14:43.081223 2255048 ssh_runner.go:195] Run: rm -f paused
	I0911 12:14:43.135636 2255048 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:14:43.137835 2255048 out.go:177] * Done! kubectl is now configured to use "no-preload-352076" cluster and "default" namespace by default
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:07:45 UTC, ends at Mon 2023-09-11 12:26:39 UTC. --
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.625582126Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f22c8a55-6132-4ef4-862a-023919ca523a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.625792606Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f22c8a55-6132-4ef4-862a-023919ca523a name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.802188270Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=77363cc2-8320-4d31-b4d3-be903cf405d4 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.802441991Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:5a291a09ceb34efecac50c49738c14680067cf9273aa5a3a0855631ef24087ba,Metadata:&PodSandboxMetadata{Name:metrics-server-74d5856cc6-7w6xl,Uid:85bd354d-1256-4a7c-b592-2c3fc2f5df5b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434124088689301,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-74d5856cc6-7w6xl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 85bd354d-1256-4a7c-b592-2c3fc2f5df5b,k8s-app: metrics-server,pod-template-hash: 74d5856cc6,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:43.727289472Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&PodSandboxMetadata{Name:coredns-5644d7b6d9-55m96,Uid:5d921d6f-960e-4606-9b0f-9c53e
ca5f2a2,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434114849508646,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,k8s-app: kube-dns,pod-template-hash: 5644d7b6d9,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:26.765905538Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&PodSandboxMetadata{Name:busybox,Uid:7495b31e-0dad-4554-82d1-2aad824ed73d,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434113853391163,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08
:26.765904093Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&PodSandboxMetadata{Name:kube-proxy-855lt,Uid:1a95a90c-09bc-46e0-a535-232c2edb964e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434107523839604,Labels:map[string]string{controller-revision-hash: 68594d95c,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535-232c2edb964e,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:26.765899226Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:f278d62d-eed6-47d4-9a76-388b47b929ec,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434107502789595,Labels:map[str
ing]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernete
s.io/config.seen: 2023-09-11T12:08:26.765902262Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&PodSandboxMetadata{Name:kube-apiserver-old-k8s-version-642215,Uid:5e27da8a03806823491e179b401b1948,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434098358578752,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 5e27da8a03806823491e179b401b1948,kubernetes.io/config.seen: 2023-09-11T12:08:17.78719497Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&PodSandboxMetadata{Name:etcd-old-k8s-version-642215,Uid:f63f2769f725a608fbbfc2cb2cc6d5e7,Nam
espace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434098324962774,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: f63f2769f725a608fbbfc2cb2cc6d5e7,kubernetes.io/config.seen: 2023-09-11T12:08:17.787193172Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&PodSandboxMetadata{Name:kube-scheduler-old-k8s-version-642215,Uid:b3d303074fe0ca1d42a8bd9ed248df09,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434098263942170,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe
0ca1d42a8bd9ed248df09,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b3d303074fe0ca1d42a8bd9ed248df09,kubernetes.io/config.seen: 2023-09-11T12:08:17.787191179Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-old-k8s-version-642215,Uid:b39706a67360d65bfa3cf2560791efe9,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434098259321720,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: b39706a67360d65bfa3cf2560791efe9,kubernetes.io/config.seen: 2023-09-11T12:08:17.787184816Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file=
"go-grpc-middleware/chain.go:25" id=77363cc2-8320-4d31-b4d3-be903cf405d4 name=/runtime.v1alpha2.RuntimeService/ListPodSandbox
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.803203001Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=72c444fd-ac8b-4797-a68e-e612dbd94968 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.803281900Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=72c444fd-ac8b-4797-a68e-e612dbd94968 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.803456079Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.ku
bernetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.co
ntainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[strin
g]string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io
.kubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=72c444fd-ac8b-4797-a68e-e612dbd94968 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.830218015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=321d1345-2195-46da-b1dc-6c132277c1b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.830309876Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=321d1345-2195-46da-b1dc-6c132277c1b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:26:39 old-k8s-version-642215 crio[718]: time="2023-09-11 12:26:39.830535846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434139054452807,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-388b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 1,io
.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c,PodSandboxId:9fe8b0836b24b469f018df79df63ad935eee659527e49be80a03f0f569cfcbcd,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/coredns@sha256:12eb885b8685b1b13a04ecf5c23bc809c2e57917252fd7b0be9e9c00644e8ee5,State:CONTAINER_RUNNING,CreatedAt:1694434115259466498,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5644d7b6d9-55m96,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5d921d6f-960e-4606-9b0f-9c53eca5f2a2,},Annotations:map[string]string{io.kubernetes.container.hash: 97d96008,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\"
:\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:397e2be089f5ecf2fcf4a3b548c10d726952c337c772ab3a5d863097838a44e5,PodSandboxId:eba2f18ad5b1d57419f68b0ce8433c1f3343f5d4d6e08535968a3f87de6dadbf,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434115200907976,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kub
ernetes.pod.uid: 7495b31e-0dad-4554-82d1-2aad824ed73d,},Annotations:map[string]string{io.kubernetes.container.hash: b63fd5d8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1,PodSandboxId:86c017cad5241a447bb06562ce31947108440d90a89d1fca94664c096462d8a4,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-proxy@sha256:cbceafaf273cb8f988bbb745cdf92224e481bbd1f26ccc750aecc6614bbf1a5c,State:CONTAINER_RUNNING,CreatedAt:1694434108436285705,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-855lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1a95a90c-09bc-46e0-a535
-232c2edb964e,},Annotations:map[string]string{io.kubernetes.container.hash: 4df6175f,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476,PodSandboxId:80174bcc525b256bc98df62e7ceb801931db19b0a84e94fca4c741717e84d046,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434108361595974,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f278d62d-eed6-47d4-9a76-3
88b47b929ec,},Annotations:map[string]string{io.kubernetes.container.hash: 27254c1e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a,PodSandboxId:389aca9390241e65e244f7a74a8633863891b39929c9b8acb9c4b105e5fe7df3,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/etcd@sha256:12c2c5e5731c3bcd56e6f1c05c0f9198b6f06793fa7fca2fb43aab9622dc4afa,State:CONTAINER_RUNNING,CreatedAt:1694434101205533014,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f63f2769f725a608fbbfc2cb2cc6d5e7,},Annotations:map[string]string{io.kube
rnetes.container.hash: 236c6cfa,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a,PodSandboxId:abb8204cbce23f9b8951b057a83966ffcf5a99f7e9a2b9e362708e9e561c11c3,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-scheduler@sha256:094023ab9cd02059eb0295d234ff9ea321e0e22e4813986d7f1a1ac4dc1990d0,State:CONTAINER_RUNNING,CreatedAt:1694434099272612167,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b3d303074fe0ca1d42a8bd9ed248df09,},Annotations:map[string]string{io.kubernetes.cont
ainer.hash: 69e1a0b8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d,PodSandboxId:2a390cdc636bca2261ebcd7659bcd907c776d7680ee6a700ec4699ecd721e686,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-controller-manager@sha256:21942b42625f47a378008272b5b0b7e0c7f0e1be42569b6163796cebfad4bbf4,State:CONTAINER_RUNNING,CreatedAt:1694434098917825519,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: b39706a67360d65bfa3cf2560791efe9,},Annotations:map[string]
string{io.kubernetes.container.hash: f336ba89,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259,PodSandboxId:2b7d63c9205b81175303bcb17c9cb76109cd7688679d77d9c3eb3b540e09fc18,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e,Annotations:map[string]string{},},ImageRef:k8s.gcr.io/kube-apiserver@sha256:f1f91e317568f866cf9e270bd4827a25993c7ccb7cffa1842eefee92a28388d6,State:CONTAINER_RUNNING,CreatedAt:1694434098749447587,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-old-k8s-version-642215,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5e27da8a03806823491e179b401b1948,},Annotations:map[string]string{io.k
ubernetes.container.hash: e7f22a30,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=321d1345-2195-46da-b1dc-6c132277c1b8 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	252b88d2a887d       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       1                   80174bcc525b2
	86306e2a9af35       bf261d157914477ee1a5969d28ec687f3fbfc9fb5a664b22df78e57023b0e03b                                      18 minutes ago      Running             coredns                   0                   9fe8b0836b24b
	397e2be089f5e       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   0                   eba2f18ad5b1d
	0100bc00d8805       c21b0c7400f988db4777858edd13b6d3930d62d7ccf026d2415485a52037f384                                      18 minutes ago      Running             kube-proxy                0                   86c017cad5241
	917b7542db061       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       0                   80174bcc525b2
	5b13b1dd138c8       b2756210eeabf84f3221da9959e9483f3919dc2aaab4cd45e7cd072fcbde27ed                                      18 minutes ago      Running             etcd                      0                   389aca9390241
	5e048369058e0       301ddc62b80b16315d3c2653cf3888370394277afb3187614cfa20edc352ca0a                                      18 minutes ago      Running             kube-scheduler            0                   abb8204cbce23
	3fd47e8d5f66c       06a629a7e51cdcc81a5ed6a3e6650348312f20c954ac52ee489a023628ec9c7d                                      18 minutes ago      Running             kube-controller-manager   0                   2a390cdc636bc
	aa4b9a425227b       b305571ca60a5a7818bda47da122683d75e8a1907475681ee8b1efbd06bff12e                                      18 minutes ago      Running             kube-apiserver            0                   2b7d63c9205b8
	
	* 
	* ==> coredns [86306e2a9af35782ee147ffded6c468a063efb323048201afdddec8fc2f69c0c] <==
	* E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	Trace[1947692426]: [30.001021226s] [30.001021226s] END
	E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288184       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Service: Get https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0911 11:59:37.288648       1 trace.go:82] Trace[1543856988]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-11 11:59:07.287620561 +0000 UTC m=+0.030279782) (total time: 30.000974962s):
	Trace[1543856988]: [30.000974962s] [30.000974962s] END
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288739       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Namespace: Get https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	I0911 11:59:37.288710       1 trace.go:82] Trace[77262156]: "Reflector pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94 ListAndWatch" (started: 2023-09-11 11:59:07.287381733 +0000 UTC m=+0.030040941) (total time: 30.000718939s):
	Trace[77262156]: [30.000718939s] [30.000718939s] END
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	E0911 11:59:37.288917       1 reflector.go:126] pkg/mod/k8s.io/client-go@v11.0.0+incompatible/tools/cache/reflector.go:94: Failed to list *v1.Endpoints: Get https://10.96.0.1:443/api/v1/endpoints?limit=500&resourceVersion=0: dial tcp 10.96.0.1:443: i/o timeout
	[INFO] SIGTERM: Shutting down servers then terminating
	.:53
	2023-09-11T12:08:35.531Z [INFO] plugin/reload: Running configuration MD5 = 7bc8613a521eb1a1737fc3e7c0fea3ca
	2023-09-11T12:08:35.531Z [INFO] CoreDNS-1.6.2
	2023-09-11T12:08:35.531Z [INFO] linux/amd64, go1.12.8, 795a3eb
	CoreDNS-1.6.2
	linux/amd64, go1.12.8, 795a3eb
	2023-09-11T12:08:36.541Z [INFO] 127.0.0.1:45893 - 52476 "HINFO IN 3455477577780142367.1809258028112430835. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009990164s
	
	* 
	* ==> describe nodes <==
	* Name:               old-k8s-version-642215
	Roles:              master
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=old-k8s-version-642215
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=old-k8s-version-642215
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T11_58_48_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/master=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 11:58:43 +0000
	Taints:             <none>
	Unschedulable:      false
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:25:58 +0000   Mon, 11 Sep 2023 11:58:39 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:25:58 +0000   Mon, 11 Sep 2023 11:58:39 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:25:58 +0000   Mon, 11 Sep 2023 11:58:39 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:25:58 +0000   Mon, 11 Sep 2023 12:08:37 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.58
	  Hostname:    old-k8s-version-642215
	Capacity:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	Allocatable:
	 cpu:                2
	 ephemeral-storage:  17784752Ki
	 hugepages-2Mi:      0
	 memory:             2165900Ki
	 pods:               110
	System Info:
	 Machine ID:                 c4a1895d43864ad098ba11bad3a19aef
	 System UUID:                c4a1895d-4386-4ad0-98ba-11bad3a19aef
	 Boot ID:                    f801e2ce-f70e-4d17-aa0d-5cd42b3034dc
	 Kernel Version:             5.10.57
	 OS Image:                   Buildroot 2021.02.12
	 Operating System:           linux
	 Architecture:               amd64
	 Container Runtime Version:  cri-o://1.24.1
	 Kubelet Version:            v1.16.0
	 Kube-Proxy Version:         v1.16.0
	PodCIDR:                     10.244.0.0/24
	PodCIDRs:                    10.244.0.0/24
	Non-terminated Pods:         (9 in total)
	  Namespace                  Name                                              CPU Requests  CPU Limits  Memory Requests  Memory Limits  AGE
	  ---------                  ----                                              ------------  ----------  ---------------  -------------  ---
	  default                    busybox                                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                coredns-5644d7b6d9-55m96                           100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                etcd-old-k8s-version-642215                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                kube-apiserver-old-k8s-version-642215              250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-controller-manager-old-k8s-version-642215     200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                kube-proxy-855lt                                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                kube-scheduler-old-k8s-version-642215              100m (5%)     0 (0%)      0 (0%)           0 (0%)         26m
	  kube-system                metrics-server-74d5856cc6-7w6xl                    100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         17m
	  kube-system                storage-provisioner                                0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                750m (37%)   0 (0%)
	  memory             270Mi (12%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From                                Message
	  ----    ------                   ----               ----                                -------
	  Normal  NodeHasSufficientMemory  28m (x8 over 28m)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m (x8 over 28m)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m (x7 over 28m)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientPID
	  Normal  Starting                 27m                kube-proxy, old-k8s-version-642215  Starting kube-proxy.
	  Normal  Starting                 18m                kubelet, old-k8s-version-642215     Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet, old-k8s-version-642215     Node old-k8s-version-642215 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet, old-k8s-version-642215     Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                kube-proxy, old-k8s-version-642215  Starting kube-proxy.
	
	* 
	* ==> dmesg <==
	* [Sep11 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.109713] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.969078] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.735757] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.154810] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.480068] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000022] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.573276] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.126136] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.189633] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.135092] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.292389] systemd-fstab-generator[702]: Ignoring "noauto" for root device
	[Sep11 12:08] systemd-fstab-generator[1041]: Ignoring "noauto" for root device
	[  +0.463410] kmem.limit_in_bytes is deprecated and will be removed. Please report your usecase to linux-mm@kvack.org if you depend on this functionality.
	[ +16.984808] kauditd_printk_skb: 8 callbacks suppressed
	
	* 
	* ==> etcd [5b13b1dd138c806967498a5037ca394b42688b967130af873a5dabcfd837b67a] <==
	* 2023-09-11 12:08:21.359240 I | etcdserver/api: enabled capabilities for version 3.3
	2023-09-11 12:08:23.034901 I | raft: da8d605abec0c6c9 is starting a new election at term 2
	2023-09-11 12:08:23.034965 I | raft: da8d605abec0c6c9 became candidate at term 3
	2023-09-11 12:08:23.035063 I | raft: da8d605abec0c6c9 received MsgVoteResp from da8d605abec0c6c9 at term 3
	2023-09-11 12:08:23.035079 I | raft: da8d605abec0c6c9 became leader at term 3
	2023-09-11 12:08:23.035087 I | raft: raft.node: da8d605abec0c6c9 elected leader da8d605abec0c6c9 at term 3
	2023-09-11 12:08:23.035484 I | etcdserver: published {Name:old-k8s-version-642215 ClientURLs:[https://192.168.61.58:2379]} to cluster 2d1820130fad6930
	2023-09-11 12:08:23.035597 I | embed: ready to serve client requests
	2023-09-11 12:08:23.035808 I | embed: ready to serve client requests
	2023-09-11 12:08:23.038737 I | embed: serving client requests on 127.0.0.1:2379
	2023-09-11 12:08:23.038853 I | embed: serving client requests on 192.168.61.58:2379
	2023-09-11 12:08:27.062752 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-apiserver-old-k8s-version-642215\" " with result "range_response_count:1 size:3164" took too long (202.255424ms) to execute
	2023-09-11 12:08:27.487950 W | etcdserver: read-only range request "key:\"/registry/minions/old-k8s-version-642215\" " with result "range_response_count:1 size:4117" took too long (626.535231ms) to execute
	2023-09-11 12:08:27.494359 W | etcdserver: read-only range request "key:\"/registry/ranges/servicenodeports\" " with result "range_response_count:1 size:120" took too long (633.070733ms) to execute
	2023-09-11 12:08:27.585228 W | etcdserver: request "header:<ID:14324132389981804169 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/events/default/old-k8s-version-642215.1783d6d59e60d990\" mod_revision:0 > success:<request_put:<key:\"/registry/events/default/old-k8s-version-642215.1783d6d59e60d990\" value_size:300 lease:5100760353127028357 >> failure:<>>" with result "size:16" took too long (288.451167ms) to execute
	2023-09-11 12:08:27.597862 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/kube-controller-manager-old-k8s-version-642215\" " with result "range_response_count:1 size:2852" took too long (109.159968ms) to execute
	2023-09-11 12:08:27.599229 W | etcdserver: read-only range request "key:\"/registry/pods/kube-system/etcd-old-k8s-version-642215\" " with result "range_response_count:1 size:2288" took too long (531.498271ms) to execute
	2023-09-11 12:08:27.599898 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (351.172727ms) to execute
	2023-09-11 12:08:27.627214 W | etcdserver: read-only range request "key:\"/registry/namespaces/kube-system\" " with result "range_response_count:1 size:178" took too long (119.160178ms) to execute
	2023-09-11 12:08:27.627465 W | etcdserver: read-only range request "key:\"/registry/health\" " with result "range_response_count:0 size:5" took too long (119.702891ms) to execute
	2023-09-11 12:08:57.099478 W | etcdserver: read-only range request "key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" " with result "range_response_count:0 size:5" took too long (149.502264ms) to execute
	2023-09-11 12:18:23.072750 I | mvcc: store.index: compact 818
	2023-09-11 12:18:23.077563 I | mvcc: finished scheduled compaction at 818 (took 3.936716ms)
	2023-09-11 12:23:23.080643 I | mvcc: store.index: compact 1036
	2023-09-11 12:23:23.082898 I | mvcc: finished scheduled compaction at 1036 (took 1.730847ms)
	
	* 
	* ==> kernel <==
	*  12:26:40 up 19 min,  0 users,  load average: 0.51, 0.43, 0.25
	Linux old-k8s-version-642215 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [aa4b9a425227b56c1c9d6db4bbe06c9324744b4b0042d30eb4761297fd04d259] <==
	* I0911 12:19:27.834526       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:19:27.834796       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:19:27.834928       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:19:27.834962       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:21:27.835395       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:21:27.835525       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:21:27.835595       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:21:27.835617       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:23:27.837492       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:23:27.837849       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:23:27.838044       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:23:27.838118       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:24:27.838521       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:24:27.838646       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:24:27.838683       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:24:27.838690       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:26:27.839146       1 controller.go:107] OpenAPI AggregationController: Processing item v1beta1.metrics.k8s.io
	W0911 12:26:27.839232       1 handler_proxy.go:99] no RequestInfo found in the context
	E0911 12:26:27.839297       1 controller.go:114] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:26:27.839304       1 controller.go:127] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3fd47e8d5f66ceb498726573b5ebea68ae9ae526427d99e1550bdc19e887498d] <==
	* E0911 12:20:20.058520       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:20:30.891592       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:20:50.310888       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:21:02.893836       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:21:20.563059       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:21:34.896156       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:21:50.815398       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:22:06.898733       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:22:21.067795       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:22:38.901138       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:22:51.320578       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:23:10.903620       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:23:21.573441       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:23:42.905749       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:23:51.825717       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:24:14.908799       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:24:22.079396       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:24:46.910905       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:24:52.331620       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:25:18.913258       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:25:22.583723       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:25:50.915871       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:25:52.835766       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	W0911 12:26:22.918261       1 garbagecollector.go:640] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request]
	E0911 12:26:23.088291       1 resource_quota_controller.go:407] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request
	
	* 
	* ==> kube-proxy [0100bc00d8805a5f30feabc7f66ebe0323ffe19df9898076d4da8a764f34c2c1] <==
	* W0911 11:59:07.331953       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0911 11:59:07.342518       1 node.go:135] Successfully retrieved node IP: 192.168.61.58
	I0911 11:59:07.342604       1 server_others.go:149] Using iptables Proxier.
	I0911 11:59:07.343266       1 server.go:529] Version: v1.16.0
	I0911 11:59:07.343702       1 config.go:313] Starting service config controller
	I0911 11:59:07.343732       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0911 11:59:07.345320       1 config.go:131] Starting endpoints config controller
	I0911 11:59:07.348510       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0911 11:59:07.444144       1 shared_informer.go:204] Caches are synced for service config 
	I0911 11:59:07.449135       1 shared_informer.go:204] Caches are synced for endpoints config 
	E0911 12:00:21.044182       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: Get https://control-plane.minikube.internal:8443/api/v1/services?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=490&timeout=6m54s&timeoutSeconds=414&watch=true: dial tcp 192.168.61.58:8443: connect: connection refused
	E0911 12:00:21.044690       1 reflector.go:280] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Endpoints: Get https://control-plane.minikube.internal:8443/api/v1/endpoints?allowWatchBookmarks=true&labelSelector=%21service.kubernetes.io%2Fheadless%2C%21service.kubernetes.io%2Fservice-proxy-name&resourceVersion=491&timeout=9m1s&timeoutSeconds=541&watch=true: dial tcp 192.168.61.58:8443: connect: connection refused
	W0911 12:08:29.006908       1 server_others.go:329] Flag proxy-mode="" unknown, assuming iptables proxy
	I0911 12:08:29.021959       1 node.go:135] Successfully retrieved node IP: 192.168.61.58
	I0911 12:08:29.022112       1 server_others.go:149] Using iptables Proxier.
	I0911 12:08:29.023473       1 server.go:529] Version: v1.16.0
	I0911 12:08:29.024283       1 config.go:131] Starting endpoints config controller
	I0911 12:08:29.024326       1 shared_informer.go:197] Waiting for caches to sync for endpoints config
	I0911 12:08:29.024379       1 config.go:313] Starting service config controller
	I0911 12:08:29.024385       1 shared_informer.go:197] Waiting for caches to sync for service config
	I0911 12:08:29.125077       1 shared_informer.go:204] Caches are synced for service config 
	I0911 12:08:29.125216       1 shared_informer.go:204] Caches are synced for endpoints config 
	
	* 
	* ==> kube-scheduler [5e048369058e02cc73e51cdd76dc6382168870533640d4efcb549c551bf9558a] <==
	* E0911 11:58:43.367339       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:58:44.349215       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 11:58:44.349346       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 11:58:44.349416       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 11:58:44.352263       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0911 11:58:44.352373       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 11:58:44.373457       1 reflector.go:123] k8s.io/kubernetes/cmd/kube-scheduler/app/server.go:236: Failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0911 11:58:44.376392       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 11:58:44.376498       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 11:58:44.376557       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 11:58:44.379431       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 11:58:44.379561       1 reflector.go:123] k8s.io/client-go/informers/factory.go:134: Failed to list *v1beta1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 11:59:04.493588       1 factory.go:585] pod is already present in the activeQ
	E0911 11:59:04.595426       1 factory.go:585] pod is already present in the activeQ
	I0911 12:08:20.259933       1 serving.go:319] Generated self-signed cert in-memory
	W0911 12:08:26.769260       1 authentication.go:262] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 12:08:26.769456       1 authentication.go:199] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 12:08:26.769581       1 authentication.go:200] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 12:08:26.769595       1 authentication.go:201] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 12:08:26.782338       1 server.go:143] Version: v1.16.0
	I0911 12:08:26.782569       1 defaults.go:91] TaintNodesByCondition is enabled, PodToleratesNodeTaints predicate is mandatory
	W0911 12:08:26.787301       1 authorization.go:47] Authorization is disabled
	W0911 12:08:26.787347       1 authentication.go:79] Authentication is disabled
	I0911 12:08:26.787365       1 deprecated_insecure_serving.go:51] Serving healthz insecurely on [::]:10251
	I0911 12:08:26.787827       1 secure_serving.go:123] Serving securely on 127.0.0.1:10259
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:07:45 UTC, ends at Mon 2023-09-11 12:26:40 UTC. --
	Sep 11 12:22:10 old-k8s-version-642215 kubelet[1047]: E0911 12:22:10.801233    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:22:22 old-k8s-version-642215 kubelet[1047]: E0911 12:22:22.801060    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:22:36 old-k8s-version-642215 kubelet[1047]: E0911 12:22:36.801093    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:22:49 old-k8s-version-642215 kubelet[1047]: E0911 12:22:49.801304    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:23:04 old-k8s-version-642215 kubelet[1047]: E0911 12:23:04.801492    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:23:17 old-k8s-version-642215 kubelet[1047]: E0911 12:23:17.867247    1047 container_manager_linux.go:510] failed to find cgroups of kubelet - cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service
	Sep 11 12:23:18 old-k8s-version-642215 kubelet[1047]: E0911 12:23:18.801263    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:23:32 old-k8s-version-642215 kubelet[1047]: E0911 12:23:32.801280    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:23:43 old-k8s-version-642215 kubelet[1047]: E0911 12:23:43.802257    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:23:57 old-k8s-version-642215 kubelet[1047]: E0911 12:23:57.802092    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:24:12 old-k8s-version-642215 kubelet[1047]: E0911 12:24:12.801402    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:24:26 old-k8s-version-642215 kubelet[1047]: E0911 12:24:26.801588    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:24:41 old-k8s-version-642215 kubelet[1047]: E0911 12:24:41.832688    1047 remote_image.go:113] PullImage "fake.domain/registry.k8s.io/echoserver:1.4" from image service failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:24:41 old-k8s-version-642215 kubelet[1047]: E0911 12:24:41.832800    1047 kuberuntime_image.go:50] Pull image "fake.domain/registry.k8s.io/echoserver:1.4" failed: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:24:41 old-k8s-version-642215 kubelet[1047]: E0911 12:24:41.832866    1047 kuberuntime_manager.go:783] container start failed: ErrImagePull: rpc error: code = Unknown desc = pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:24:41 old-k8s-version-642215 kubelet[1047]: E0911 12:24:41.832918    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ErrImagePull: "rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host"
	Sep 11 12:24:52 old-k8s-version-642215 kubelet[1047]: E0911 12:24:52.801370    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:25:04 old-k8s-version-642215 kubelet[1047]: E0911 12:25:04.801219    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:25:18 old-k8s-version-642215 kubelet[1047]: E0911 12:25:18.801128    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:25:30 old-k8s-version-642215 kubelet[1047]: E0911 12:25:30.801348    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:25:43 old-k8s-version-642215 kubelet[1047]: E0911 12:25:43.801872    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:25:58 old-k8s-version-642215 kubelet[1047]: E0911 12:25:58.805265    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:26:09 old-k8s-version-642215 kubelet[1047]: E0911 12:26:09.801468    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:26:23 old-k8s-version-642215 kubelet[1047]: E0911 12:26:23.800880    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	Sep 11 12:26:37 old-k8s-version-642215 kubelet[1047]: E0911 12:26:37.807607    1047 pod_workers.go:191] Error syncing pod 85bd354d-1256-4a7c-b592-2c3fc2f5df5b ("metrics-server-74d5856cc6-7w6xl_kube-system(85bd354d-1256-4a7c-b592-2c3fc2f5df5b)"), skipping: failed to "StartContainer" for "metrics-server" with ImagePullBackOff: "Back-off pulling image \"fake.domain/registry.k8s.io/echoserver:1.4\""
	
	* 
	* ==> storage-provisioner [252b88d2a887deabe9c5b3149015de2ab2927a144e28c7bea551912533252ef4] <==
	* I0911 12:08:59.221337       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:08:59.237126       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:08:59.237301       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:09:16.692256       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:09:16.692920       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_f7824995-0433-48d5-8675-51099812bef5!
	I0911 12:09:16.692538       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34515631-2adf-4713-905f-9eb8481301ed", APIVersion:"v1", ResourceVersion:"618", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-642215_f7824995-0433-48d5-8675-51099812bef5 became leader
	I0911 12:09:16.794185       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_f7824995-0433-48d5-8675-51099812bef5!
	
	* 
	* ==> storage-provisioner [917b7542db0614edb676fece24f1ba1013a3e009661db9fbc0b2130fa45a2476] <==
	* I0911 11:59:07.720670       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 11:59:07.735303       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 11:59:07.735419       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 11:59:07.749233       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 11:59:07.750185       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_e90f9802-99cb-4fb2-ac8f-8b5869f829c1!
	I0911 11:59:07.749640       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"34515631-2adf-4713-905f-9eb8481301ed", APIVersion:"v1", ResourceVersion:"392", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' old-k8s-version-642215_e90f9802-99cb-4fb2-ac8f-8b5869f829c1 became leader
	I0911 11:59:07.851899       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_old-k8s-version-642215_e90f9802-99cb-4fb2-ac8f-8b5869f829c1!
	I0911 12:08:28.658365       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0911 12:08:58.666634       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-642215 -n old-k8s-version-642215
helpers_test.go:261: (dbg) Run:  kubectl --context old-k8s-version-642215 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-74d5856cc6-7w6xl
helpers_test.go:274: ======> post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context old-k8s-version-642215 describe pod metrics-server-74d5856cc6-7w6xl
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context old-k8s-version-642215 describe pod metrics-server-74d5856cc6-7w6xl: exit status 1 (78.708442ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-74d5856cc6-7w6xl" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context old-k8s-version-642215 describe pod metrics-server-74d5856cc6-7w6xl: exit status 1
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (537.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (375.11s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-235462 -n embed-certs-235462
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:28:22.72245409 +0000 UTC m=+5508.035078987
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-235462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-235462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.709µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-235462 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-235462 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-235462 logs -n 25: (1.172194747s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| stop    | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642215        | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:01 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-226537 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	|         | disable-driver-mounts-226537                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:02 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-352076                  | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484027  | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-235462                 | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642215             | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484027       | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC | 11 Sep 23 12:13 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:26 UTC | 11 Sep 23 12:26 UTC |
	| start   | -p newest-cni-867563 --memory=2200 --alsologtostderr   | newest-cni-867563            | jenkins | v1.31.2 | 11 Sep 23 12:26 UTC | 11 Sep 23 12:27 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:27 UTC | 11 Sep 23 12:27 UTC |
	| start   | -p auto-640433 --memory=3072                           | auto-640433                  | jenkins | v1.31.2 | 11 Sep 23 12:27 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-867563             | newest-cni-867563            | jenkins | v1.31.2 | 11 Sep 23 12:27 UTC | 11 Sep 23 12:27 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-867563                                   | newest-cni-867563            | jenkins | v1.31.2 | 11 Sep 23 12:27 UTC | 11 Sep 23 12:27 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-867563                  | newest-cni-867563            | jenkins | v1.31.2 | 11 Sep 23 12:27 UTC | 11 Sep 23 12:27 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-867563 --memory=2200 --alsologtostderr   | newest-cni-867563            | jenkins | v1.31.2 | 11 Sep 23 12:27 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:27:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:27:58.563689 2261957 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:27:58.563799 2261957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:27:58.563812 2261957 out.go:309] Setting ErrFile to fd 2...
	I0911 12:27:58.563816 2261957 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:27:58.564007 2261957 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:27:58.564572 2261957 out.go:303] Setting JSON to false
	I0911 12:27:58.565597 2261957 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":238230,"bootTime":1694197049,"procs":214,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:27:58.565665 2261957 start.go:138] virtualization: kvm guest
	I0911 12:27:58.568424 2261957 out.go:177] * [newest-cni-867563] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:27:58.570146 2261957 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:27:58.570190 2261957 notify.go:220] Checking for updates...
	I0911 12:27:58.571830 2261957 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:27:58.573477 2261957 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:27:58.575079 2261957 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:27:58.576620 2261957 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:27:58.578267 2261957 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:27:58.580191 2261957 config.go:182] Loaded profile config "newest-cni-867563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:27:58.580557 2261957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:27:58.580631 2261957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:27:58.595895 2261957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43825
	I0911 12:27:58.596307 2261957 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:27:58.596960 2261957 main.go:141] libmachine: Using API Version  1
	I0911 12:27:58.596989 2261957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:27:58.597368 2261957 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:27:58.597567 2261957 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:58.597878 2261957 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:27:58.598316 2261957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:27:58.598372 2261957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:27:58.614550 2261957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34455
	I0911 12:27:58.614972 2261957 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:27:58.615463 2261957 main.go:141] libmachine: Using API Version  1
	I0911 12:27:58.615493 2261957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:27:58.615820 2261957 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:27:58.616031 2261957 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:58.656150 2261957 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 12:27:58.657634 2261957 start.go:298] selected driver: kvm2
	I0911 12:27:58.657649 2261957 start.go:902] validating driver "kvm2" against &{Name:newest-cni-867563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-867563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.4 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:27:58.657815 2261957 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:27:58.658514 2261957 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:27:58.658626 2261957 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:27:58.675295 2261957 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:27:58.675716 2261957 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0911 12:27:58.675753 2261957 cni.go:84] Creating CNI manager for ""
	I0911 12:27:58.675760 2261957 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:27:58.675767 2261957 start_flags.go:321] config:
	{Name:newest-cni-867563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-867563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.4 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:27:58.675933 2261957 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:27:58.678144 2261957 out.go:177] * Starting control plane node newest-cni-867563 in cluster newest-cni-867563
	I0911 12:27:58.679627 2261957 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:27:58.679677 2261957 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:27:58.679687 2261957 cache.go:57] Caching tarball of preloaded images
	I0911 12:27:58.679791 2261957 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:27:58.679803 2261957 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:27:58.679934 2261957 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/config.json ...
	I0911 12:27:58.680123 2261957 start.go:365] acquiring machines lock for newest-cni-867563: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:27:59.774119 2261957 start.go:369] acquired machines lock for "newest-cni-867563" in 1.093950808s
	I0911 12:27:59.774192 2261957 start.go:96] Skipping create...Using existing machine configuration
	I0911 12:27:59.774205 2261957 fix.go:54] fixHost starting: 
	I0911 12:27:59.774604 2261957 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:27:59.774656 2261957 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:27:59.794807 2261957 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41209
	I0911 12:27:59.795394 2261957 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:27:59.795974 2261957 main.go:141] libmachine: Using API Version  1
	I0911 12:27:59.796002 2261957 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:27:59.796354 2261957 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:27:59.796540 2261957 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:59.796731 2261957 main.go:141] libmachine: (newest-cni-867563) Calling .GetState
	I0911 12:27:59.798406 2261957 fix.go:102] recreateIfNeeded on newest-cni-867563: state=Stopped err=<nil>
	I0911 12:27:59.798447 2261957 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	W0911 12:27:59.798657 2261957 fix.go:128] unexpected machine state, will restart: <nil>
	I0911 12:27:59.801168 2261957 out.go:177] * Restarting existing kvm2 VM for "newest-cni-867563" ...
	I0911 12:27:58.165264 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.165911 2261580 main.go:141] libmachine: (auto-640433) Found IP for machine: 192.168.72.226
	I0911 12:27:58.165934 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has current primary IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.165941 2261580 main.go:141] libmachine: (auto-640433) Reserving static IP address...
	I0911 12:27:58.166446 2261580 main.go:141] libmachine: (auto-640433) DBG | unable to find host DHCP lease matching {name: "auto-640433", mac: "52:54:00:62:ea:89", ip: "192.168.72.226"} in network mk-auto-640433
	I0911 12:27:58.255627 2261580 main.go:141] libmachine: (auto-640433) Reserved static IP address: 192.168.72.226
	I0911 12:27:58.255667 2261580 main.go:141] libmachine: (auto-640433) DBG | Getting to WaitForSSH function...
	I0911 12:27:58.255679 2261580 main.go:141] libmachine: (auto-640433) Waiting for SSH to be available...
	I0911 12:27:58.258569 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.259039 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:minikube Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.259087 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.259215 2261580 main.go:141] libmachine: (auto-640433) DBG | Using SSH client type: external
	I0911 12:27:58.259247 2261580 main.go:141] libmachine: (auto-640433) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/auto-640433/id_rsa (-rw-------)
	I0911 12:27:58.259281 2261580 main.go:141] libmachine: (auto-640433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.226 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/auto-640433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:27:58.259303 2261580 main.go:141] libmachine: (auto-640433) DBG | About to run SSH command:
	I0911 12:27:58.259317 2261580 main.go:141] libmachine: (auto-640433) DBG | exit 0
	I0911 12:27:58.361340 2261580 main.go:141] libmachine: (auto-640433) DBG | SSH cmd err, output: <nil>: 
	I0911 12:27:58.361603 2261580 main.go:141] libmachine: (auto-640433) KVM machine creation complete!
	I0911 12:27:58.361955 2261580 main.go:141] libmachine: (auto-640433) Calling .GetConfigRaw
	I0911 12:27:58.362520 2261580 main.go:141] libmachine: (auto-640433) Calling .DriverName
	I0911 12:27:58.362816 2261580 main.go:141] libmachine: (auto-640433) Calling .DriverName
	I0911 12:27:58.363027 2261580 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 12:27:58.363047 2261580 main.go:141] libmachine: (auto-640433) Calling .GetState
	I0911 12:27:58.364693 2261580 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 12:27:58.364713 2261580 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 12:27:58.364723 2261580 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 12:27:58.364732 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:58.367680 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.368139 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.368168 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.368522 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:58.368725 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.368937 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.369114 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:58.369349 2261580 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:58.370021 2261580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0911 12:27:58.370040 2261580 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 12:27:58.500527 2261580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:27:58.500554 2261580 main.go:141] libmachine: Detecting the provisioner...
	I0911 12:27:58.500574 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:58.503293 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.503798 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.503835 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.504035 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:58.504229 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.504399 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.504533 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:58.504760 2261580 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:58.505383 2261580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0911 12:27:58.505398 2261580 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 12:27:58.638756 2261580 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 12:27:58.638910 2261580 main.go:141] libmachine: found compatible host: buildroot
	I0911 12:27:58.638932 2261580 main.go:141] libmachine: Provisioning with buildroot...
	I0911 12:27:58.638944 2261580 main.go:141] libmachine: (auto-640433) Calling .GetMachineName
	I0911 12:27:58.639268 2261580 buildroot.go:166] provisioning hostname "auto-640433"
	I0911 12:27:58.639292 2261580 main.go:141] libmachine: (auto-640433) Calling .GetMachineName
	I0911 12:27:58.639465 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:58.642589 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.643064 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.643087 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.643272 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:58.643489 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.643646 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.643823 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:58.644002 2261580 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:58.644429 2261580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0911 12:27:58.644445 2261580 main.go:141] libmachine: About to run SSH command:
	sudo hostname auto-640433 && echo "auto-640433" | sudo tee /etc/hostname
	I0911 12:27:58.782479 2261580 main.go:141] libmachine: SSH cmd err, output: <nil>: auto-640433
	
	I0911 12:27:58.782522 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:58.785622 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.786030 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.786054 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.786253 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:58.786487 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.786701 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.786856 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:58.787002 2261580 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:58.787475 2261580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0911 12:27:58.787493 2261580 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sauto-640433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 auto-640433/g' /etc/hosts;
				else 
					echo '127.0.1.1 auto-640433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:27:58.925561 2261580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:27:58.925614 2261580 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:27:58.925675 2261580 buildroot.go:174] setting up certificates
	I0911 12:27:58.925686 2261580 provision.go:83] configureAuth start
	I0911 12:27:58.925701 2261580 main.go:141] libmachine: (auto-640433) Calling .GetMachineName
	I0911 12:27:58.926049 2261580 main.go:141] libmachine: (auto-640433) Calling .GetIP
	I0911 12:27:58.928979 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.929392 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.929426 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.929570 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:58.931997 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.932355 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.932391 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.932507 2261580 provision.go:138] copyHostCerts
	I0911 12:27:58.932605 2261580 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:27:58.932623 2261580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:27:58.932704 2261580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:27:58.932857 2261580 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:27:58.932869 2261580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:27:58.932905 2261580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:27:58.933000 2261580 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:27:58.933015 2261580 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:27:58.933053 2261580 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:27:58.933151 2261580 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.auto-640433 san=[192.168.72.226 192.168.72.226 localhost 127.0.0.1 minikube auto-640433]
	I0911 12:27:58.982795 2261580 provision.go:172] copyRemoteCerts
	I0911 12:27:58.982857 2261580 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:27:58.982885 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:58.986032 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.986383 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:58.986424 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:58.986616 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:58.986807 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:58.987009 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:58.987112 2261580 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/auto-640433/id_rsa Username:docker}
	I0911 12:27:59.080957 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1212 bytes)
	I0911 12:27:59.106422 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0911 12:27:59.129725 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:27:59.154921 2261580 provision.go:86] duration metric: configureAuth took 229.219608ms
	I0911 12:27:59.154949 2261580 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:27:59.155191 2261580 config.go:182] Loaded profile config "auto-640433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:27:59.155292 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:59.158594 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.159059 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.159110 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.159226 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:59.159447 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:59.159596 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:59.159779 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:59.159992 2261580 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:59.160390 2261580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0911 12:27:59.160409 2261580 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:27:59.494266 2261580 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:27:59.494303 2261580 main.go:141] libmachine: Checking connection to Docker...
	I0911 12:27:59.494339 2261580 main.go:141] libmachine: (auto-640433) Calling .GetURL
	I0911 12:27:59.496034 2261580 main.go:141] libmachine: (auto-640433) DBG | Using libvirt version 6000000
	I0911 12:27:59.498657 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.499017 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.499060 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.499269 2261580 main.go:141] libmachine: Docker is up and running!
	I0911 12:27:59.499281 2261580 main.go:141] libmachine: Reticulating splines...
	I0911 12:27:59.499288 2261580 client.go:171] LocalClient.Create took 29.527516647s
	I0911 12:27:59.499313 2261580 start.go:167] duration metric: libmachine.API.Create for "auto-640433" took 29.527586074s
	I0911 12:27:59.499326 2261580 start.go:300] post-start starting for "auto-640433" (driver="kvm2")
	I0911 12:27:59.499334 2261580 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:27:59.499352 2261580 main.go:141] libmachine: (auto-640433) Calling .DriverName
	I0911 12:27:59.499609 2261580 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:27:59.499643 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:59.502023 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.502332 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.502355 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.502510 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:59.502726 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:59.502886 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:59.503066 2261580 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/auto-640433/id_rsa Username:docker}
	I0911 12:27:59.594757 2261580 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:27:59.599253 2261580 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:27:59.599281 2261580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:27:59.599351 2261580 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:27:59.599429 2261580 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:27:59.599522 2261580 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:27:59.609271 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:27:59.636606 2261580 start.go:303] post-start completed in 137.264576ms
	I0911 12:27:59.636670 2261580 main.go:141] libmachine: (auto-640433) Calling .GetConfigRaw
	I0911 12:27:59.637357 2261580 main.go:141] libmachine: (auto-640433) Calling .GetIP
	I0911 12:27:59.640581 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.641022 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.641065 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.641399 2261580 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/config.json ...
	I0911 12:27:59.641617 2261580 start.go:128] duration metric: createHost completed in 29.690389247s
	I0911 12:27:59.641647 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:59.643968 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.644299 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.644332 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.644445 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:59.644647 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:59.644863 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:59.645072 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:59.645266 2261580 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:59.645654 2261580 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.72.226 22 <nil> <nil>}
	I0911 12:27:59.645666 2261580 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:27:59.773963 2261580 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694435279.757965004
	
	I0911 12:27:59.773989 2261580 fix.go:206] guest clock: 1694435279.757965004
	I0911 12:27:59.773998 2261580 fix.go:219] Guest: 2023-09-11 12:27:59.757965004 +0000 UTC Remote: 2023-09-11 12:27:59.641635779 +0000 UTC m=+29.814058812 (delta=116.329225ms)
	I0911 12:27:59.774016 2261580 fix.go:190] guest clock delta is within tolerance: 116.329225ms
	I0911 12:27:59.774022 2261580 start.go:83] releasing machines lock for "auto-640433", held for 29.822872895s
	I0911 12:27:59.774050 2261580 main.go:141] libmachine: (auto-640433) Calling .DriverName
	I0911 12:27:59.774366 2261580 main.go:141] libmachine: (auto-640433) Calling .GetIP
	I0911 12:27:59.777521 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.777964 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.777995 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.778192 2261580 main.go:141] libmachine: (auto-640433) Calling .DriverName
	I0911 12:27:59.778759 2261580 main.go:141] libmachine: (auto-640433) Calling .DriverName
	I0911 12:27:59.778964 2261580 main.go:141] libmachine: (auto-640433) Calling .DriverName
	I0911 12:27:59.779071 2261580 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:27:59.779130 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:59.779204 2261580 ssh_runner.go:195] Run: cat /version.json
	I0911 12:27:59.779233 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHHostname
	I0911 12:27:59.782274 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.783121 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.783186 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.783208 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.783332 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:59.783592 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:59.783794 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:27:59.783828 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:27:59.783834 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:59.783983 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHPort
	I0911 12:27:59.784046 2261580 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/auto-640433/id_rsa Username:docker}
	I0911 12:27:59.784153 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHKeyPath
	I0911 12:27:59.784298 2261580 main.go:141] libmachine: (auto-640433) Calling .GetSSHUsername
	I0911 12:27:59.784424 2261580 sshutil.go:53] new ssh client: &{IP:192.168.72.226 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/auto-640433/id_rsa Username:docker}
	I0911 12:27:59.901936 2261580 ssh_runner.go:195] Run: systemctl --version
	I0911 12:27:59.909214 2261580 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:28:00.072576 2261580 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:28:00.079391 2261580 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:28:00.079478 2261580 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:28:00.097443 2261580 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:28:00.097472 2261580 start.go:466] detecting cgroup driver to use...
	I0911 12:28:00.097550 2261580 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:28:00.115433 2261580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:28:00.128793 2261580 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:28:00.128873 2261580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:28:00.143237 2261580 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:28:00.157029 2261580 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:28:00.273298 2261580 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:28:00.401907 2261580 docker.go:212] disabling docker service ...
	I0911 12:28:00.402000 2261580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:28:00.416295 2261580 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:28:00.428845 2261580 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:28:00.549576 2261580 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:28:00.678239 2261580 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:28:00.691686 2261580 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:28:00.713258 2261580 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:28:00.713407 2261580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:28:00.726179 2261580 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:28:00.726274 2261580 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:28:00.738955 2261580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:28:00.751204 2261580 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:28:00.763752 2261580 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:28:00.774400 2261580 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:28:00.783735 2261580 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:28:00.783809 2261580 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:28:00.797505 2261580 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:28:00.810440 2261580 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:28:00.930011 2261580 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0911 12:28:01.124540 2261580 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:28:01.124644 2261580 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:28:01.130645 2261580 start.go:534] Will wait 60s for crictl version
	I0911 12:28:01.130729 2261580 ssh_runner.go:195] Run: which crictl
	I0911 12:28:01.135120 2261580 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:28:01.174313 2261580 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:28:01.174409 2261580 ssh_runner.go:195] Run: crio --version
	I0911 12:28:01.228694 2261580 ssh_runner.go:195] Run: crio --version
	I0911 12:28:01.290275 2261580 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:27:59.802595 2261957 main.go:141] libmachine: (newest-cni-867563) Calling .Start
	I0911 12:27:59.802782 2261957 main.go:141] libmachine: (newest-cni-867563) Ensuring networks are active...
	I0911 12:27:59.803540 2261957 main.go:141] libmachine: (newest-cni-867563) Ensuring network default is active
	I0911 12:27:59.803877 2261957 main.go:141] libmachine: (newest-cni-867563) Ensuring network mk-newest-cni-867563 is active
	I0911 12:27:59.804269 2261957 main.go:141] libmachine: (newest-cni-867563) Getting domain xml...
	I0911 12:27:59.805015 2261957 main.go:141] libmachine: (newest-cni-867563) Creating domain...
	I0911 12:28:01.223387 2261957 main.go:141] libmachine: (newest-cni-867563) Waiting to get IP...
	I0911 12:28:01.224425 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:01.224898 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:01.224998 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:01.224912 2261992 retry.go:31] will retry after 293.053543ms: waiting for machine to come up
	I0911 12:28:01.519635 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:01.520345 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:01.520504 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:01.520435 2261992 retry.go:31] will retry after 297.72534ms: waiting for machine to come up
	I0911 12:28:01.820374 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:01.820964 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:01.820993 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:01.820910 2261992 retry.go:31] will retry after 294.515951ms: waiting for machine to come up
	I0911 12:28:02.117689 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:02.118883 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:02.118920 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:02.118827 2261992 retry.go:31] will retry after 493.512001ms: waiting for machine to come up
	I0911 12:28:02.614459 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:02.615086 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:02.615131 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:02.615032 2261992 retry.go:31] will retry after 533.421373ms: waiting for machine to come up
	I0911 12:28:03.149870 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:03.150443 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:03.150471 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:03.150349 2261992 retry.go:31] will retry after 819.156001ms: waiting for machine to come up
	I0911 12:28:01.292119 2261580 main.go:141] libmachine: (auto-640433) Calling .GetIP
	I0911 12:28:01.295751 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:28:01.296359 2261580 main.go:141] libmachine: (auto-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:62:ea:89", ip: ""} in network mk-auto-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:27:48 +0000 UTC Type:0 Mac:52:54:00:62:ea:89 Iaid: IPaddr:192.168.72.226 Prefix:24 Hostname:auto-640433 Clientid:01:52:54:00:62:ea:89}
	I0911 12:28:01.296412 2261580 main.go:141] libmachine: (auto-640433) DBG | domain auto-640433 has defined IP address 192.168.72.226 and MAC address 52:54:00:62:ea:89 in network mk-auto-640433
	I0911 12:28:01.296695 2261580 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0911 12:28:01.301926 2261580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:28:01.315510 2261580 localpath.go:92] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/client.crt -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/client.crt
	I0911 12:28:01.315698 2261580 localpath.go:117] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/client.key -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/client.key
	I0911 12:28:01.315838 2261580 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:28:01.315884 2261580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:28:01.348660 2261580 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:28:01.348753 2261580 ssh_runner.go:195] Run: which lz4
	I0911 12:28:01.353563 2261580 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:28:01.358617 2261580 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:28:01.358664 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:28:03.355948 2261580 crio.go:444] Took 2.002442 seconds to copy over tarball
	I0911 12:28:03.356071 2261580 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:28:03.970892 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:03.971342 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:03.971363 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:03.971309 2261992 retry.go:31] will retry after 836.744976ms: waiting for machine to come up
	I0911 12:28:04.809940 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:04.810606 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:04.810647 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:04.810551 2261992 retry.go:31] will retry after 1.444645868s: waiting for machine to come up
	I0911 12:28:06.257365 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:06.257907 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:06.257945 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:06.257852 2261992 retry.go:31] will retry after 1.216860842s: waiting for machine to come up
	I0911 12:28:07.476231 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:07.476677 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:07.476748 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:07.476649 2261992 retry.go:31] will retry after 1.749520396s: waiting for machine to come up
	I0911 12:28:07.055746 2261580 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.699633138s)
	I0911 12:28:07.055782 2261580 crio.go:451] Took 3.699798 seconds to extract the tarball
	I0911 12:28:07.055792 2261580 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:28:07.115495 2261580 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:28:07.181690 2261580 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:28:07.181720 2261580 cache_images.go:84] Images are preloaded, skipping loading
	I0911 12:28:07.181834 2261580 ssh_runner.go:195] Run: crio config
	I0911 12:28:07.250118 2261580 cni.go:84] Creating CNI manager for ""
	I0911 12:28:07.250154 2261580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:28:07.250183 2261580 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:28:07.250213 2261580 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.226 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:auto-640433 NodeName:auto-640433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.226"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.226 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:28:07.250434 2261580 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.226
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "auto-640433"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.226
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.226"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:28:07.250537 2261580 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=auto-640433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.226
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:auto-640433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:28:07.250606 2261580 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:28:07.260966 2261580 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:28:07.261059 2261580 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:28:07.270779 2261580 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (371 bytes)
	I0911 12:28:07.290911 2261580 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:28:07.307654 2261580 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2098 bytes)
	I0911 12:28:07.326633 2261580 ssh_runner.go:195] Run: grep 192.168.72.226	control-plane.minikube.internal$ /etc/hosts
	I0911 12:28:07.331954 2261580 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.226	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:28:07.348083 2261580 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433 for IP: 192.168.72.226
	I0911 12:28:07.348128 2261580 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:28:07.348331 2261580 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:28:07.348407 2261580 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:28:07.348521 2261580 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/client.key
	I0911 12:28:07.348553 2261580 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.key.bc4cc361
	I0911 12:28:07.348578 2261580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.crt.bc4cc361 with IP's: [192.168.72.226 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 12:28:07.490770 2261580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.crt.bc4cc361 ...
	I0911 12:28:07.490801 2261580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.crt.bc4cc361: {Name:mk6a9264b2a6cc939052a0a7a9c46cb5080dbf1d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:28:07.491009 2261580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.key.bc4cc361 ...
	I0911 12:28:07.491024 2261580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.key.bc4cc361: {Name:mkf598706de183e27505ef22d11c89a8fe41eb3e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:28:07.491131 2261580 certs.go:337] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.crt.bc4cc361 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.crt
	I0911 12:28:07.491230 2261580 certs.go:341] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.key.bc4cc361 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.key
	I0911 12:28:07.491295 2261580 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.key
	I0911 12:28:07.491317 2261580 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.crt with IP's: []
	I0911 12:28:07.660644 2261580 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.crt ...
	I0911 12:28:07.660680 2261580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.crt: {Name:mka207ebea93d1584e55dd451b862712ed0e229f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:28:07.660909 2261580 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.key ...
	I0911 12:28:07.660928 2261580 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.key: {Name:mkb1328e8cef08a564fc543f3d3dff86d8d50426 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:28:07.661138 2261580 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:28:07.661187 2261580 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:28:07.661205 2261580 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:28:07.661242 2261580 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:28:07.661276 2261580 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:28:07.661315 2261580 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:28:07.661381 2261580 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:28:07.662010 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:28:07.691635 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 12:28:07.717462 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:28:07.747212 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/auto-640433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0911 12:28:07.773823 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:28:07.800286 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:28:07.830027 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:28:07.857726 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:28:07.887163 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:28:07.915922 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:28:07.946385 2261580 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:28:07.977080 2261580 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:28:07.998216 2261580 ssh_runner.go:195] Run: openssl version
	I0911 12:28:08.006233 2261580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:28:08.018809 2261580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:28:08.024616 2261580 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:28:08.024755 2261580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:28:08.031715 2261580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:28:08.042605 2261580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:28:08.053808 2261580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:28:08.059077 2261580 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:28:08.059155 2261580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:28:08.065739 2261580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:28:08.078737 2261580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:28:08.092564 2261580 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:28:08.099145 2261580 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:28:08.099226 2261580 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:28:08.106039 2261580 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:28:08.117054 2261580 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:28:08.121503 2261580 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 12:28:08.121596 2261580 kubeadm.go:404] StartCluster: {Name:auto-640433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:auto-640433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.72.226 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:28:08.121736 2261580 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:28:08.121795 2261580 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:28:08.156978 2261580 cri.go:89] found id: ""
	I0911 12:28:08.157065 2261580 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:28:08.169028 2261580 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:28:08.179508 2261580 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:28:08.190184 2261580 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:28:08.190263 2261580 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:28:08.402266 2261580 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
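	The Service-Kubelet preflight warning above is advisory and kubeadm proceeds anyway; if desired, the service can be enabled inside the guest exactly as the message suggests, so the kubelet comes back after a reboot:

	    sudo systemctl enable kubelet.service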
	I0911 12:28:09.227558 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:09.227993 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:09.228044 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:09.227962 2261992 retry.go:31] will retry after 2.617659797s: waiting for machine to come up
	I0911 12:28:11.847251 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:11.847911 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:11.847948 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:11.847770 2261992 retry.go:31] will retry after 2.553175182s: waiting for machine to come up
	I0911 12:28:14.404041 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:28:14.404613 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:28:14.404648 2261957 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:28:14.404564 2261992 retry.go:31] will retry after 4.189663891s: waiting for machine to come up
	I0911 12:28:20.772591 2261580 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:28:20.772672 2261580 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:28:20.772782 2261580 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:28:20.772927 2261580 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:28:20.773059 2261580 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:28:20.773151 2261580 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:28:20.774831 2261580 out.go:204]   - Generating certificates and keys ...
	I0911 12:28:20.774929 2261580 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:28:20.775004 2261580 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:28:20.775202 2261580 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 12:28:20.775290 2261580 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 12:28:20.775388 2261580 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 12:28:20.775449 2261580 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 12:28:20.775509 2261580 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 12:28:20.775688 2261580 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [auto-640433 localhost] and IPs [192.168.72.226 127.0.0.1 ::1]
	I0911 12:28:20.775769 2261580 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 12:28:20.775870 2261580 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [auto-640433 localhost] and IPs [192.168.72.226 127.0.0.1 ::1]
	I0911 12:28:20.775944 2261580 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 12:28:20.776001 2261580 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 12:28:20.776054 2261580 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 12:28:20.776109 2261580 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:28:20.776179 2261580 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:28:20.776252 2261580 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:28:20.776344 2261580 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:28:20.776423 2261580 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:28:20.776556 2261580 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:28:20.776657 2261580 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:28:20.778428 2261580 out.go:204]   - Booting up control plane ...
	I0911 12:28:20.778556 2261580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:28:20.778671 2261580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:28:20.778758 2261580 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:28:20.778911 2261580 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:28:20.779031 2261580 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:28:20.779081 2261580 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:28:20.779260 2261580 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:28:20.779380 2261580 kubeadm.go:322] [apiclient] All control plane components are healthy after 8.002376 seconds
	I0911 12:28:20.779524 2261580 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:28:20.779715 2261580 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:28:20.779796 2261580 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:28:20.779990 2261580 kubeadm.go:322] [mark-control-plane] Marking the node auto-640433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:28:20.780080 2261580 kubeadm.go:322] [bootstrap-token] Using token: xtok2t.a3cmhwi1s7efxyor
	I0911 12:28:20.781636 2261580 out.go:204]   - Configuring RBAC rules ...
	I0911 12:28:20.781777 2261580 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:28:20.781875 2261580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:28:20.782023 2261580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:28:20.782163 2261580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:28:20.782329 2261580 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:28:20.782448 2261580 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:28:20.782603 2261580 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:28:20.782658 2261580 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:28:20.782720 2261580 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:28:20.782730 2261580 kubeadm.go:322] 
	I0911 12:28:20.782812 2261580 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:28:20.782822 2261580 kubeadm.go:322] 
	I0911 12:28:20.782927 2261580 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:28:20.782941 2261580 kubeadm.go:322] 
	I0911 12:28:20.782976 2261580 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:28:20.783057 2261580 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:28:20.783124 2261580 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:28:20.783133 2261580 kubeadm.go:322] 
	I0911 12:28:20.783196 2261580 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:28:20.783205 2261580 kubeadm.go:322] 
	I0911 12:28:20.783301 2261580 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:28:20.783321 2261580 kubeadm.go:322] 
	I0911 12:28:20.783392 2261580 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:28:20.783514 2261580 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:28:20.783611 2261580 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:28:20.783622 2261580 kubeadm.go:322] 
	I0911 12:28:20.783723 2261580 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:28:20.783827 2261580 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:28:20.783842 2261580 kubeadm.go:322] 
	I0911 12:28:20.783935 2261580 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token xtok2t.a3cmhwi1s7efxyor \
	I0911 12:28:20.784034 2261580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:28:20.784058 2261580 kubeadm.go:322] 	--control-plane 
	I0911 12:28:20.784068 2261580 kubeadm.go:322] 
	I0911 12:28:20.784145 2261580 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:28:20.784152 2261580 kubeadm.go:322] 
	I0911 12:28:20.784258 2261580 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token xtok2t.a3cmhwi1s7efxyor \
	I0911 12:28:20.784431 2261580 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:28:20.784444 2261580 cni.go:84] Creating CNI manager for ""
	I0911 12:28:20.784451 2261580 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:28:20.786351 2261580 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
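	The --discovery-token-ca-cert-hash printed in the join commands above can be recomputed from the cluster CA if it is ever needed again. A sketch using the standard kubeadm procedure, assuming the certificate directory /var/lib/minikube/certs reported earlier in this run (on a stock kubeadm install the CA would instead live at /etc/kubernetes/pki/ca.crt):

	    # sha256 of the DER-encoded public key of the cluster CA certificate
	    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt | \
	        openssl rsa -pubin -outform der 2>/dev/null | \
	        openssl dgst -sha256 -hex | sed 's/^.* //'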
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:07:23 UTC, ends at Mon 2023-09-11 12:28:23 UTC. --
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.071023011Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2040c4d5-c6e2-4b2f-88db-1c7dd47775f6 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.230795876Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f9864326-b1f0-46ee-9496-eb4c0aab8582 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.230922639Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f9864326-b1f0-46ee-9496-eb4c0aab8582 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.231123431Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f9864326-b1f0-46ee-9496-eb4c0aab8582 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.269381431Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6348a253-8e3e-4e08-9161-6b832476bf7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.269520363Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6348a253-8e3e-4e08-9161-6b832476bf7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.269705419Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6348a253-8e3e-4e08-9161-6b832476bf7e name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.310850498Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=f3dc2b9e-67f5-4776-8251-4ebbb1489c80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.310938754Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=f3dc2b9e-67f5-4776-8251-4ebbb1489c80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.311112999Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=f3dc2b9e-67f5-4776-8251-4ebbb1489c80 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.348739947Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=616fac90-d5aa-4b29-9476-58393dea99b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.348861369Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=616fac90-d5aa-4b29-9476-58393dea99b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.349123013Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=616fac90-d5aa-4b29-9476-58393dea99b0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.387851241Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=236030f8-67d6-4ed7-b3e5-601b43d4a974 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.387940211Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=236030f8-67d6-4ed7-b3e5-601b43d4a974 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.388218638Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=236030f8-67d6-4ed7-b3e5-601b43d4a974 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.425948015Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=30cec784-09fa-4d57-a95e-a2dadba739c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.426045500Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=30cec784-09fa-4d57-a95e-a2dadba739c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.426228977Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=30cec784-09fa-4d57-a95e-a2dadba739c1 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.465053218Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=341a0f83-4f87-46eb-80f2-4f4e651070d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.465137358Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=341a0f83-4f87-46eb-80f2-4f4e651070d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.465321786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=341a0f83-4f87-46eb-80f2-4f4e651070d0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.504143491Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1a96b103-6256-437e-9f04-5b21f48f6e3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.504219327Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1a96b103-6256-437e-9f04-5b21f48f6e3b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:28:23 embed-certs-235462 crio[717]: time="2023-09-11 12:28:23.504389580Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429,PodSandboxId:2e6607472212b72856bc09e63b8309d338b79bdfbf9b0e6fda7e950debfbafee,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434385080295588,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1930e88f-3cd5-4235-aefa-106e5d92fcab,},Annotations:map[string]string{io.kubernetes.container.hash: 83ed812e,io.kubernetes.container.restartCount: 0,io.kub
ernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a,PodSandboxId:1999a19e8956a888b0ebbb615b9ef8e437110e297b8a5243283a3a6330d5437f,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434383495269731,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-zlcth,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5b02a945-710a-45aa-94b1-aab1f6f0f685,},Annotations:map[string]string{io.kubernetes.container.hash: e1614254,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239,PodSandboxId:36da79fad0977a16f4794ed3a5707b97c50d12e759e624ac4763fd9d01db583e,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434384256357866,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-hzq9f,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 21c42924-879d-49a2-977d-4b8457984928,},Annotations:map[string]string{io.kubernetes.container.hash: 9b2d0ad2,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-
tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b,PodSandboxId:6a200efd589fdb355f518e741dc1bd1b8785dc597828a5c2a17d0c9925490f8d,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434361014677246,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6c30c214599d59394d727c43876e39a7,},An
notations:map[string]string{io.kubernetes.container.hash: f2b3f266,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713,PodSandboxId:17d58a640b32698e30d0d51834c03928283d4417df60e31eec985436e230108c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434360834497120,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: f9855fe525c
2f0bb84a0934d2e00b5d2,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34,PodSandboxId:afb23c8282c99bdf452a25714e2a0d78b01301a14a37ff9b1bee9a9fd7248679,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434360650094799,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe93b57d720942ea19e29
fbff776ba42,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7,PodSandboxId:dbca37a0722d3257adeb5270d052b69b2b7a21ee02ec2edef1b109316fe04020,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434360487522125,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-235462,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2f74fac50f0aec0f61eeaa4dd82e06
f,},Annotations:map[string]string{io.kubernetes.container.hash: 528230d2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1a96b103-6256-437e-9f04-5b21f48f6e3b name=/runtime.v1alpha2.RuntimeService/ListContainers
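The block of crio debug lines above is the runtime answering periodic ListContainers polls over its unix socket. As a rough editorial illustration only, the sketch below issues the same CRI v1alpha2 RPC against the crio socket; the socket path, and a cri-api module version that still ships the v1alpha2 package (as crio 1.24 serves here), are assumptions and not part of the captured test run.

	package main

	import (
		"context"
		"fmt"
		"log"
		"time"

		"google.golang.org/grpc"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1alpha2" // assumes a module version that still ships v1alpha2
	)

	func main() {
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Dial the crio socket (path assumed; it matches the cri-socket annotation shown later in this report).
		conn, err := grpc.DialContext(ctx, "unix:///var/run/crio/crio.sock",
			grpc.WithInsecure(), grpc.WithBlock())
		if err != nil {
			log.Fatalf("dial crio socket: %v", err)
		}
		defer conn.Close()

		// An empty filter corresponds to the "No filters were applied" debug lines above.
		resp, err := runtimeapi.NewRuntimeServiceClient(conn).
			ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s  %v\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}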
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	e81fbe6b94d58       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   2e6607472212b
	b795df7f42a7c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   36da79fad0977
	2bfb96d3e2a49       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   15 minutes ago      Running             kube-proxy                0                   1999a19e8956a
	0ac50f64245d9       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   15 minutes ago      Running             etcd                      2                   6a200efd589fd
	3fde1e3e93d68       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   15 minutes ago      Running             kube-controller-manager   2                   17d58a640b326
	738708a4c7cb1       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   15 minutes ago      Running             kube-scheduler            2                   afb23c8282c99
	2ba4ad4b835e5       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   15 minutes ago      Running             kube-apiserver            2                   dbca37a0722d3
	
	* 
	* ==> coredns [b795df7f42a7cbc4a303c737a422e2b4f88927af3feb2d08220501ba9435f239] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = b1a455670bb5d7fd89b1330085768be95fff75b4bad96aaa3401a966da5ff6e52d21b0affe6cd8290073027bf8b56132a09c4a2bf21619d1b275a531761a3578
	[INFO] Reloading complete
	[INFO] 127.0.0.1:59226 - 44552 "HINFO IN 2336702580251102645.7041884445010550068. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.009525966s
	
	* 
	* ==> describe nodes <==
	* Name:               embed-certs-235462
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-235462
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=embed-certs-235462
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T12_12_49_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 12:12:45 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-235462
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 12:28:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:23:22 +0000   Mon, 11 Sep 2023 12:12:42 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:23:22 +0000   Mon, 11 Sep 2023 12:12:42 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:23:22 +0000   Mon, 11 Sep 2023 12:12:42 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:23:22 +0000   Mon, 11 Sep 2023 12:12:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.96
	  Hostname:    embed-certs-235462
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 d37f3e78025c49b7a144561b8b7550e8
	  System UUID:                d37f3e78-025c-49b7-a144-561b8b7550e8
	  Boot ID:                    1932a667-69e9-491f-b94b-5fa920cc9eb9
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-hzq9f                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-235462                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-235462             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-235462    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-zlcth                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-235462             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-qbrf2               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 15m                kube-proxy       
	  Normal  NodeHasSufficientMemory  15m (x8 over 15m)  kubelet          Node embed-certs-235462 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m (x8 over 15m)  kubelet          Node embed-certs-235462 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m (x7 over 15m)  kubelet          Node embed-certs-235462 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                kubelet          Node embed-certs-235462 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m                kubelet          Node embed-certs-235462 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                kubelet          Node embed-certs-235462 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m                kubelet          Node embed-certs-235462 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 15m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           15m                node-controller  Node embed-certs-235462 event: Registered Node embed-certs-235462 in Controller
	  Normal  NodeReady                15m                kubelet          Node embed-certs-235462 status is now: NodeReady
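For readers cross-checking the node description above: the percentages are simply the summed pod requests and limits over the node's allocatable capacity. A small arithmetic sketch (editorial, not part of the test output) using the figures from this node:

	package main

	import "fmt"

	func main() {
		// Allocatable capacity from the node description above.
		allocCPUMilli := int64(2000)  // 2 CPUs
		allocMemKi := int64(2165900)  // memory: 2165900Ki

		// Sums of the per-pod figures in the table above.
		cpuReqMilli := int64(100 + 100 + 250 + 200 + 0 + 100 + 100 + 0) // 850m
		memReqKi := int64(70+100+200) * 1024                            // 370Mi
		memLimKi := int64(170) * 1024                                   // 170Mi (coredns limit)

		fmt.Printf("cpu requests:    %d%%\n", cpuReqMilli*100/allocCPUMilli) // 42%
		fmt.Printf("memory requests: %d%%\n", memReqKi*100/allocMemKi)       // 17%
		fmt.Printf("memory limits:   %d%%\n", memLimKi*100/allocMemKi)       // 8%
	}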
	
	* 
	* ==> dmesg <==
	* [Sep11 12:07] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.094011] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.722474] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.741675] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155274] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000002] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.448189] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.199419] systemd-fstab-generator[641]: Ignoring "noauto" for root device
	[  +0.113017] systemd-fstab-generator[652]: Ignoring "noauto" for root device
	[  +0.162353] systemd-fstab-generator[665]: Ignoring "noauto" for root device
	[  +0.117645] systemd-fstab-generator[676]: Ignoring "noauto" for root device
	[  +0.237402] systemd-fstab-generator[700]: Ignoring "noauto" for root device
	[ +17.304865] systemd-fstab-generator[916]: Ignoring "noauto" for root device
	[Sep11 12:08] kauditd_printk_skb: 29 callbacks suppressed
	[Sep11 12:12] systemd-fstab-generator[3576]: Ignoring "noauto" for root device
	[  +9.855657] systemd-fstab-generator[3899]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [0ac50f64245d947b0a571434a95d4c65c9570c5315a4cfb7aafe14cc6b1ba89b] <==
	* {"level":"info","ts":"2023-09-11T12:12:42.657669Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:12:42.657703Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:12:42.657883Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T12:12:42.658081Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T12:12:42.684196Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.96:2379"}
	{"level":"info","ts":"2023-09-11T12:22:42.835357Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":706}
	{"level":"info","ts":"2023-09-11T12:22:42.839647Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":706,"took":"2.999648ms","hash":4146682728}
	{"level":"info","ts":"2023-09-11T12:22:42.839831Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4146682728,"revision":706,"compact-revision":-1}
	{"level":"warn","ts":"2023-09-11T12:27:17.457371Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"118.089465ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12177322742676425341 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:28fe8a8428838a7c>","response":"size:40"}
	{"level":"warn","ts":"2023-09-11T12:27:17.697239Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.101007ms","expected-duration":"100ms","prefix":"","request":"header:<ID:12177322742676425342 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/192.168.50.96\" mod_revision:1165 > success:<request_put:<key:\"/registry/masterleases/192.168.50.96\" value_size:66 lease:2953950705821649532 >> failure:<request_range:<key:\"/registry/masterleases/192.168.50.96\" > >>","response":"size:16"}
	{"level":"info","ts":"2023-09-11T12:27:17.697654Z","caller":"traceutil/trace.go:171","msg":"trace[1867877333] linearizableReadLoop","detail":"{readStateIndex:1363; appliedIndex:1362; }","duration":"162.163305ms","start":"2023-09-11T12:27:17.535382Z","end":"2023-09-11T12:27:17.697545Z","steps":["trace[1867877333] 'read index received'  (duration: 32.805166ms)","trace[1867877333] 'applied index is now lower than readState.Index'  (duration: 129.345384ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T12:27:17.697691Z","caller":"traceutil/trace.go:171","msg":"trace[763224368] transaction","detail":"{read_only:false; response_revision:1173; number_of_response:1; }","duration":"238.591497ms","start":"2023-09-11T12:27:17.459047Z","end":"2023-09-11T12:27:17.697639Z","steps":["trace[763224368] 'process raft request'  (duration: 109.207769ms)","trace[763224368] 'compare'  (duration: 127.974351ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T12:27:17.69791Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"162.529517ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T12:27:17.697983Z","caller":"traceutil/trace.go:171","msg":"trace[1223052693] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1173; }","duration":"162.625542ms","start":"2023-09-11T12:27:17.535349Z","end":"2023-09-11T12:27:17.697975Z","steps":["trace[1223052693] 'agreement among raft nodes before linearized reading'  (duration: 162.389238ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:27:42.846105Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":949}
	{"level":"info","ts":"2023-09-11T12:27:42.848556Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":949,"took":"2.02549ms","hash":3916224659}
	{"level":"info","ts":"2023-09-11T12:27:42.848636Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3916224659,"revision":949,"compact-revision":706}
	{"level":"info","ts":"2023-09-11T12:28:05.790635Z","caller":"traceutil/trace.go:171","msg":"trace[620986663] transaction","detail":"{read_only:false; response_revision:1211; number_of_response:1; }","duration":"218.916767ms","start":"2023-09-11T12:28:05.571677Z","end":"2023-09-11T12:28:05.790594Z","steps":["trace[620986663] 'process raft request'  (duration: 218.596471ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T12:28:07.393065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"204.913303ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.50.96\" ","response":"range_response_count:1 size:133"}
	{"level":"info","ts":"2023-09-11T12:28:07.393241Z","caller":"traceutil/trace.go:171","msg":"trace[1315405170] range","detail":"{range_begin:/registry/masterleases/192.168.50.96; range_end:; response_count:1; response_revision:1212; }","duration":"205.116469ms","start":"2023-09-11T12:28:07.188103Z","end":"2023-09-11T12:28:07.393219Z","steps":["trace[1315405170] 'range keys from in-memory index tree'  (duration: 204.812689ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:28:07.563964Z","caller":"traceutil/trace.go:171","msg":"trace[1796290045] transaction","detail":"{read_only:false; response_revision:1214; number_of_response:1; }","duration":"125.980836ms","start":"2023-09-11T12:28:07.437959Z","end":"2023-09-11T12:28:07.56394Z","steps":["trace[1796290045] 'process raft request'  (duration: 61.588086ms)","trace[1796290045] 'compare'  (duration: 64.195839ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T12:28:08.015845Z","caller":"traceutil/trace.go:171","msg":"trace[1552192604] linearizableReadLoop","detail":"{readStateIndex:1417; appliedIndex:1416; }","duration":"190.129032ms","start":"2023-09-11T12:28:07.825697Z","end":"2023-09-11T12:28:08.015826Z","steps":["trace[1552192604] 'read index received'  (duration: 189.917166ms)","trace[1552192604] 'applied index is now lower than readState.Index'  (duration: 211.193µs)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T12:28:08.01614Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.422084ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/networkpolicies/\" range_end:\"/registry/networkpolicies0\" count_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T12:28:08.016222Z","caller":"traceutil/trace.go:171","msg":"trace[2104846508] range","detail":"{range_begin:/registry/networkpolicies/; range_end:/registry/networkpolicies0; response_count:0; response_revision:1215; }","duration":"190.54621ms","start":"2023-09-11T12:28:07.825662Z","end":"2023-09-11T12:28:08.016208Z","steps":["trace[2104846508] 'agreement among raft nodes before linearized reading'  (duration: 190.35102ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:28:08.016495Z","caller":"traceutil/trace.go:171","msg":"trace[1005748950] transaction","detail":"{read_only:false; response_revision:1215; number_of_response:1; }","duration":"213.827528ms","start":"2023-09-11T12:28:07.802651Z","end":"2023-09-11T12:28:08.016479Z","steps":["trace[1005748950] 'process raft request'  (duration: 213.016803ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  12:28:23 up 21 min,  0 users,  load average: 0.06, 0.12, 0.15
	Linux embed-certs-235462 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [2ba4ad4b835e55854600df61242be9cee72884f370a3d82517ed6b839e6c92c7] <==
	* I0911 12:25:46.247737       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:25:46.251235       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:25:46.251538       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:25:46.251578       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:26:45.120637       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.170.213:443: connect: connection refused
	I0911 12:26:45.120678       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0911 12:27:17.698761       1 trace.go:236] Trace[18107287]: "GuaranteedUpdate etcd3" audit-id:,key:/masterleases/192.168.50.96,type:*v1.Endpoints,resource:apiServerIPInfo (11-Sep-2023 12:27:17.184) (total time: 514ms):
	Trace[18107287]: ---"Transaction prepared" 248ms (12:27:17.458)
	Trace[18107287]: ---"Txn call completed" 240ms (12:27:17.698)
	Trace[18107287]: [514.391903ms] [514.391903ms] END
	I0911 12:27:45.120599       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.170.213:443: connect: connection refused
	I0911 12:27:45.120679       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:27:45.252702       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:27:45.252963       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:27:45.254041       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.101.170.213:443: connect: connection refused
	I0911 12:27:45.254099       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:27:46.254325       1 handler_proxy.go:93] no RequestInfo found in the context
	W0911 12:27:46.254404       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:27:46.254723       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:27:46.254771       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0911 12:27:46.254821       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:27:46.256850       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [3fde1e3e93d683a72125721fde50b1141194d169c2bc3449c2385a47b9121713] <==
	* I0911 12:22:30.637783       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:23:00.186636       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:23:00.649013       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:23:30.194355       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:23:30.659290       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:24:00.202069       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:24:00.670693       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:24:23.092959       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="332.759µs"
	E0911 12:24:30.208879       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:24:30.685932       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:24:34.091136       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="276.313µs"
	E0911 12:25:00.217391       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:25:00.697146       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:25:30.224649       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:25:30.707882       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:26:00.232261       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:26:00.718080       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:26:30.239354       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:26:30.728810       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:27:00.248378       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:27:00.742363       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:27:30.256188       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:27:30.753333       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:28:00.264986       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:28:00.767182       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [2bfb96d3e2a4981b6b8e8f074a43798e6fbbfbe98dab77c527e6c1491333605a] <==
	* I0911 12:13:04.886046       1 server_others.go:69] "Using iptables proxy"
	I0911 12:13:04.908867       1 node.go:141] Successfully retrieved node IP: 192.168.50.96
	I0911 12:13:04.979517       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 12:13:04.979594       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 12:13:04.982383       1 server_others.go:152] "Using iptables Proxier"
	I0911 12:13:04.982572       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 12:13:04.982874       1 server.go:846] "Version info" version="v1.28.1"
	I0911 12:13:04.983314       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:13:04.985550       1 config.go:188] "Starting service config controller"
	I0911 12:13:04.985663       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 12:13:04.985790       1 config.go:97] "Starting endpoint slice config controller"
	I0911 12:13:04.985925       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 12:13:04.989012       1 config.go:315] "Starting node config controller"
	I0911 12:13:04.989119       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 12:13:05.086951       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 12:13:05.087027       1 shared_informer.go:318] Caches are synced for service config
	I0911 12:13:05.091597       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [738708a4c7cb15461d3b4ab04d06c6645c68e13f7e26fa411e00dcb340bf9b34] <==
	* W0911 12:12:45.267805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 12:12:45.267841       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 12:12:45.269853       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 12:12:45.269903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 12:12:46.126632       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0911 12:12:46.126749       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0911 12:12:46.225087       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 12:12:46.225145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0911 12:12:46.250503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 12:12:46.250567       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0911 12:12:46.395717       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 12:12:46.395782       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 12:12:46.402979       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 12:12:46.403145       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0911 12:12:46.420524       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 12:12:46.420668       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 12:12:46.544212       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0911 12:12:46.544270       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0911 12:12:46.551823       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 12:12:46.551903       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 12:12:46.575998       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 12:12:46.576240       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0911 12:12:46.787380       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 12:12:46.787565       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0911 12:12:50.052409       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:07:23 UTC, ends at Mon 2023-09-11 12:28:24 UTC. --
	Sep 11 12:25:49 embed-certs-235462 kubelet[3905]: E0911 12:25:49.151633    3905 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:25:49 embed-certs-235462 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:25:49 embed-certs-235462 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:25:49 embed-certs-235462 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:25:51 embed-certs-235462 kubelet[3905]: E0911 12:25:51.071900    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:26:05 embed-certs-235462 kubelet[3905]: E0911 12:26:05.069711    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:26:18 embed-certs-235462 kubelet[3905]: E0911 12:26:18.071049    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:26:32 embed-certs-235462 kubelet[3905]: E0911 12:26:32.070553    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:26:46 embed-certs-235462 kubelet[3905]: E0911 12:26:46.070768    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:26:49 embed-certs-235462 kubelet[3905]: E0911 12:26:49.155351    3905 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:26:49 embed-certs-235462 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:26:49 embed-certs-235462 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:26:49 embed-certs-235462 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:27:01 embed-certs-235462 kubelet[3905]: E0911 12:27:01.070902    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:27:13 embed-certs-235462 kubelet[3905]: E0911 12:27:13.072887    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:27:24 embed-certs-235462 kubelet[3905]: E0911 12:27:24.069705    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:27:38 embed-certs-235462 kubelet[3905]: E0911 12:27:38.073955    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:27:49 embed-certs-235462 kubelet[3905]: E0911 12:27:49.154933    3905 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:27:49 embed-certs-235462 kubelet[3905]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:27:49 embed-certs-235462 kubelet[3905]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:27:49 embed-certs-235462 kubelet[3905]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:27:49 embed-certs-235462 kubelet[3905]: E0911 12:27:49.323883    3905 container_manager_linux.go:514] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /, memory: /system.slice/kubelet.service"
	Sep 11 12:27:52 embed-certs-235462 kubelet[3905]: E0911 12:27:52.070830    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:28:07 embed-certs-235462 kubelet[3905]: E0911 12:28:07.071176    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	Sep 11 12:28:20 embed-certs-235462 kubelet[3905]: E0911 12:28:20.070280    3905 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-qbrf2" podUID="086e38b9-c5da-4c0a-bed5-a97ffda47d36"
	
	* 
	* ==> storage-provisioner [e81fbe6b94d5805123c48f3c87d20ba81ff05a9e4e69c556f3eacc689ddfc429] <==
	* I0911 12:13:05.280736       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:13:05.302612       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:13:05.302848       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:13:05.346181       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:13:05.348155       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-235462_9e61034f-8d48-4595-9ef5-6f168482312d!
	I0911 12:13:05.350633       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"9c314536-3286-4153-950f-1093a98f838f", APIVersion:"v1", ResourceVersion:"451", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-235462_9e61034f-8d48-4595-9ef5-6f168482312d became leader
	I0911 12:13:05.449607       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-235462_9e61034f-8d48-4595-9ef5-6f168482312d!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-235462 -n embed-certs-235462
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-235462 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-qbrf2
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-235462 describe pod metrics-server-57f55c9bc5-qbrf2
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-235462 describe pod metrics-server-57f55c9bc5-qbrf2: exit status 1 (72.966583ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-qbrf2" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-235462 describe pod metrics-server-57f55c9bc5-qbrf2: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (375.11s)
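The post-mortem steps above enumerate non-Running pods with the field selector status.phase!=Running and then describe each one; the NotFound error on the describe most likely means metrics-server-57f55c9bc5-qbrf2 was deleted or replaced by a new ReplicaSet pod between the two kubectl calls. A minimal client-go sketch of that listing step, assuming the kubeconfig's current context already points at the embed-certs-235462 cluster (the test itself shells out to kubectl instead):

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig's current context is embed-certs-235462.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Same query the post-mortem runs via kubectl: every pod, in every namespace,
	// whose phase is anything other than Running.
	pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(context.Background(), metav1.ListOptions{
		FieldSelector: "status.phase!=Running",
	})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		fmt.Printf("%s/%s\t%s\n", p.Namespace, p.Name, p.Status.Phase)
	}
}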

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.55s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0911 12:22:45.892509 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:31:15.657721638 +0000 UTC m=+5680.970346528
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-484027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.982µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-484027 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
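The check described above reduces to two steps: wait up to 9 minutes for pods labelled k8s-app=kubernetes-dashboard to appear in the kubernetes-dashboard namespace, then confirm the dashboard-metrics-scraper deployment carries the custom image registry.k8s.io/echoserver:1.4. A minimal client-go sketch of that check, under the assumption that the kubeconfig's current context already points at default-k8s-diff-port-484027 and with an illustrative 10-second poll interval (the real test drives the kubectl and minikube binaries instead):

package main

import (
	"context"
	"fmt"
	"strings"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: the default kubeconfig's current context is default-k8s-diff-port-484027.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()

	// Step 1: wait up to 9 minutes for any dashboard pod to exist (the test then also
	// waits for readiness; omitted here for brevity).
	err = wait.PollImmediate(10*time.Second, 9*time.Minute, func() (bool, error) {
		pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
			LabelSelector: "k8s-app=kubernetes-dashboard",
		})
		if err != nil {
			return false, nil // treat transient API errors as "keep polling"
		}
		return len(pods.Items) > 0, nil
	})
	if err != nil {
		panic(fmt.Errorf("no kubernetes-dashboard pods appeared: %w", err))
	}

	// Step 2: the addon test expects the scraper deployment to use the custom echoserver image.
	dep, err := cs.AppsV1().Deployments("kubernetes-dashboard").Get(ctx, "dashboard-metrics-scraper", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range dep.Spec.Template.Spec.Containers {
		if strings.Contains(c.Image, "registry.k8s.io/echoserver:1.4") {
			fmt.Println("custom addon image found:", c.Image)
			return
		}
	}
	fmt.Println("expected image registry.k8s.io/echoserver:1.4 not found in deployment")
}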
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-484027 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-484027 logs -n 25: (1.378370329s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                    Args                    |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p kindnet-640433 sudo cat                 | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo                     | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | cri-dockerd --version                      |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo                     | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC |                     |
	|         | systemctl status containerd                |                           |         |         |                     |                     |
	|         | --all --full --no-pager                    |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo                     | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | systemctl cat containerd                   |                           |         |         |                     |                     |
	|         | --no-pager                                 |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo cat                 | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | /lib/systemd/system/containerd.service     |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo cat                 | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | /etc/containerd/config.toml                |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo                     | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | containerd config dump                     |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo                     | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | systemctl status crio --all                |                           |         |         |                     |                     |
	|         | --full --no-pager                          |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo                     | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | systemctl cat crio --no-pager              |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo find                | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | /etc/crio -type f -exec sh -c              |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p kindnet-640433 sudo crio                | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | config                                     |                           |         |         |                     |                     |
	| delete  | -p kindnet-640433                          | kindnet-640433            | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	| start   | -p enable-default-cni-640433               | enable-default-cni-640433 | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC |                     |
	|         | --memory=3072                              |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true              |                           |         |         |                     |                     |
	|         | --wait-timeout=15m                         |                           |         |         |                     |                     |
	|         | --enable-default-cni=true                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                              |                           |         |         |                     |                     |
	|         | --container-runtime=crio                   |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 pgrep -a                  | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:30 UTC | 11 Sep 23 12:30 UTC |
	|         | kubelet                                    |                           |         |         |                     |                     |
	| ssh     | -p custom-flannel-640433 pgrep             | custom-flannel-640433     | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | -a kubelet                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo cat                  | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | /etc/nsswitch.conf                         |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo cat                  | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | /etc/hosts                                 |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo cat                  | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | /etc/resolv.conf                           |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo crictl               | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | pods                                       |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo crictl               | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | ps --all                                   |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo find                 | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | /etc/cni -type f -exec sh -c               |                           |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                       |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo ip a s               | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	| ssh     | -p calico-640433 sudo ip r s               | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	| ssh     | -p calico-640433 sudo                      | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | iptables-save                              |                           |         |         |                     |                     |
	| ssh     | -p calico-640433 sudo iptables             | calico-640433             | jenkins | v1.31.2 | 11 Sep 23 12:31 UTC | 11 Sep 23 12:31 UTC |
	|         | -t nat -L -n -v                            |                           |         |         |                     |                     |
	|---------|--------------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:30:24
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:30:24.024956 2266399 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:30:24.025116 2266399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:30:24.025126 2266399 out.go:309] Setting ErrFile to fd 2...
	I0911 12:30:24.025132 2266399 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:30:24.025339 2266399 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:30:24.025957 2266399 out.go:303] Setting JSON to false
	I0911 12:30:24.027105 2266399 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":238375,"bootTime":1694197049,"procs":316,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:30:24.027171 2266399 start.go:138] virtualization: kvm guest
	I0911 12:30:24.029838 2266399 out.go:177] * [enable-default-cni-640433] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:30:24.031346 2266399 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:30:24.031399 2266399 notify.go:220] Checking for updates...
	I0911 12:30:24.032788 2266399 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:30:24.034241 2266399 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:30:24.035990 2266399 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:30:24.037574 2266399 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:30:24.039035 2266399 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:30:24.042171 2266399 config.go:182] Loaded profile config "calico-640433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:30:24.042345 2266399 config.go:182] Loaded profile config "custom-flannel-640433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:30:24.042491 2266399 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:30:24.042641 2266399 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:30:24.083855 2266399 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 12:30:24.085493 2266399 start.go:298] selected driver: kvm2
	I0911 12:30:24.085520 2266399 start.go:902] validating driver "kvm2" against <nil>
	I0911 12:30:24.085568 2266399 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:30:24.086296 2266399 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:30:24.086388 2266399 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:30:24.103028 2266399 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:30:24.103083 2266399 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	E0911 12:30:24.103291 2266399 start_flags.go:455] Found deprecated --enable-default-cni flag, setting --cni=bridge
	I0911 12:30:24.103321 2266399 start_flags.go:922] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0911 12:30:24.103369 2266399 cni.go:84] Creating CNI manager for "bridge"
	I0911 12:30:24.103388 2266399 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 12:30:24.103401 2266399 start_flags.go:321] config:
	{Name:enable-default-cni-640433 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-640433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Con
tainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:30:24.103561 2266399 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:30:24.105433 2266399 out.go:177] * Starting control plane node enable-default-cni-640433 in cluster enable-default-cni-640433
	I0911 12:30:24.106887 2266399 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:30:24.106941 2266399 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:30:24.106954 2266399 cache.go:57] Caching tarball of preloaded images
	I0911 12:30:24.107045 2266399 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:30:24.107059 2266399 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:30:24.107208 2266399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/config.json ...
	I0911 12:30:24.107233 2266399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/config.json: {Name:mk61ddbb1d8c5f4a005ff9d64dfa3f60f5cf7289 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:24.107430 2266399 start.go:365] acquiring machines lock for enable-default-cni-640433: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:30:24.107487 2266399 start.go:369] acquired machines lock for "enable-default-cni-640433" in 31.346µs
	I0911 12:30:24.107515 2266399 start.go:93] Provisioning new machine with config: &{Name:enable-default-cni-640433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 Kube
rnetesConfig:{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-640433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Mo
untIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:30:24.107628 2266399 start.go:125] createHost starting for "" (driver="kvm2")
	I0911 12:30:21.309556 2263691 node_ready.go:58] node "calico-640433" has status "Ready":"False"
	I0911 12:30:23.359884 2263691 node_ready.go:49] node "calico-640433" has status "Ready":"True"
	I0911 12:30:23.359916 2263691 node_ready.go:38] duration metric: took 11.317060097s waiting for node "calico-640433" to be "Ready" ...
	I0911 12:30:23.359930 2263691 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:30:23.390100 2263691 pod_ready.go:78] waiting up to 15m0s for pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:24.109676 2266399 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0911 12:30:24.109864 2266399 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:30:24.109918 2266399 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:30:24.126918 2266399 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34417
	I0911 12:30:24.127545 2266399 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:30:24.128231 2266399 main.go:141] libmachine: Using API Version  1
	I0911 12:30:24.128260 2266399 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:30:24.128729 2266399 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:30:24.129005 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetMachineName
	I0911 12:30:24.129233 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:24.129388 2266399 start.go:159] libmachine.API.Create for "enable-default-cni-640433" (driver="kvm2")
	I0911 12:30:24.129421 2266399 client.go:168] LocalClient.Create starting
	I0911 12:30:24.129468 2266399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem
	I0911 12:30:24.129517 2266399 main.go:141] libmachine: Decoding PEM data...
	I0911 12:30:24.129541 2266399 main.go:141] libmachine: Parsing certificate...
	I0911 12:30:24.129623 2266399 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem
	I0911 12:30:24.129656 2266399 main.go:141] libmachine: Decoding PEM data...
	I0911 12:30:24.129675 2266399 main.go:141] libmachine: Parsing certificate...
	I0911 12:30:24.129707 2266399 main.go:141] libmachine: Running pre-create checks...
	I0911 12:30:24.129723 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .PreCreateCheck
	I0911 12:30:24.130130 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetConfigRaw
	I0911 12:30:24.130640 2266399 main.go:141] libmachine: Creating machine...
	I0911 12:30:24.130659 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .Create
	I0911 12:30:24.130810 2266399 main.go:141] libmachine: (enable-default-cni-640433) Creating KVM machine...
	I0911 12:30:24.132375 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found existing default KVM network
	I0911 12:30:24.133717 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:24.133550 2266421 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:92:62:7e} reservation:<nil>}
	I0911 12:30:24.135155 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:24.135085 2266421 network.go:209] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000229d50}
	I0911 12:30:24.141140 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | trying to create private KVM network mk-enable-default-cni-640433 192.168.50.0/24...
	I0911 12:30:24.252577 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | private KVM network mk-enable-default-cni-640433 192.168.50.0/24 created
	I0911 12:30:24.252628 2266399 main.go:141] libmachine: (enable-default-cni-640433) Setting up store path in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433 ...
	I0911 12:30:24.252649 2266399 main.go:141] libmachine: (enable-default-cni-640433) Building disk image from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 12:30:24.252672 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:24.252627 2266421 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:30:24.252870 2266399 main.go:141] libmachine: (enable-default-cni-640433) Downloading /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0911 12:30:24.517390 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:24.517221 2266421 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/id_rsa...
	I0911 12:30:24.722932 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:24.722757 2266421 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/enable-default-cni-640433.rawdisk...
	I0911 12:30:24.722993 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Writing magic tar header
	I0911 12:30:24.723015 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Writing SSH key tar header
	I0911 12:30:24.723030 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:24.722920 2266421 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433 ...
	I0911 12:30:24.723105 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433
	I0911 12:30:24.723144 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines
	I0911 12:30:24.723161 2266399 main.go:141] libmachine: (enable-default-cni-640433) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433 (perms=drwx------)
	I0911 12:30:24.723174 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:30:24.723195 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273
	I0911 12:30:24.723211 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0911 12:30:24.723232 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Checking permissions on dir: /home/jenkins
	I0911 12:30:24.723248 2266399 main.go:141] libmachine: (enable-default-cni-640433) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines (perms=drwxr-xr-x)
	I0911 12:30:24.723259 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Checking permissions on dir: /home
	I0911 12:30:24.723274 2266399 main.go:141] libmachine: (enable-default-cni-640433) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube (perms=drwxr-xr-x)
	I0911 12:30:24.723297 2266399 main.go:141] libmachine: (enable-default-cni-640433) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273 (perms=drwxrwxr-x)
	I0911 12:30:24.723319 2266399 main.go:141] libmachine: (enable-default-cni-640433) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0911 12:30:24.723336 2266399 main.go:141] libmachine: (enable-default-cni-640433) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0911 12:30:24.723346 2266399 main.go:141] libmachine: (enable-default-cni-640433) Creating domain...
	I0911 12:30:24.723360 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Skipping /home - not owner
	I0911 12:30:24.724701 2266399 main.go:141] libmachine: (enable-default-cni-640433) define libvirt domain using xml: 
	I0911 12:30:24.724729 2266399 main.go:141] libmachine: (enable-default-cni-640433) <domain type='kvm'>
	I0911 12:30:24.724739 2266399 main.go:141] libmachine: (enable-default-cni-640433)   <name>enable-default-cni-640433</name>
	I0911 12:30:24.724749 2266399 main.go:141] libmachine: (enable-default-cni-640433)   <memory unit='MiB'>3072</memory>
	I0911 12:30:24.724758 2266399 main.go:141] libmachine: (enable-default-cni-640433)   <vcpu>2</vcpu>
	I0911 12:30:24.724776 2266399 main.go:141] libmachine: (enable-default-cni-640433)   <features>
	I0911 12:30:24.724790 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <acpi/>
	I0911 12:30:24.724802 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <apic/>
	I0911 12:30:24.724826 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <pae/>
	I0911 12:30:24.724838 2266399 main.go:141] libmachine: (enable-default-cni-640433)     
	I0911 12:30:24.724946 2266399 main.go:141] libmachine: (enable-default-cni-640433)   </features>
	I0911 12:30:24.724989 2266399 main.go:141] libmachine: (enable-default-cni-640433)   <cpu mode='host-passthrough'>
	I0911 12:30:24.725004 2266399 main.go:141] libmachine: (enable-default-cni-640433)   
	I0911 12:30:24.725016 2266399 main.go:141] libmachine: (enable-default-cni-640433)   </cpu>
	I0911 12:30:24.725045 2266399 main.go:141] libmachine: (enable-default-cni-640433)   <os>
	I0911 12:30:24.725065 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <type>hvm</type>
	I0911 12:30:24.725079 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <boot dev='cdrom'/>
	I0911 12:30:24.725096 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <boot dev='hd'/>
	I0911 12:30:24.725111 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <bootmenu enable='no'/>
	I0911 12:30:24.725120 2266399 main.go:141] libmachine: (enable-default-cni-640433)   </os>
	I0911 12:30:24.725130 2266399 main.go:141] libmachine: (enable-default-cni-640433)   <devices>
	I0911 12:30:24.725145 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <disk type='file' device='cdrom'>
	I0911 12:30:24.725166 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/boot2docker.iso'/>
	I0911 12:30:24.725181 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <target dev='hdc' bus='scsi'/>
	I0911 12:30:24.725196 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <readonly/>
	I0911 12:30:24.725208 2266399 main.go:141] libmachine: (enable-default-cni-640433)     </disk>
	I0911 12:30:24.725223 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <disk type='file' device='disk'>
	I0911 12:30:24.725248 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0911 12:30:24.725269 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/enable-default-cni-640433.rawdisk'/>
	I0911 12:30:24.725284 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <target dev='hda' bus='virtio'/>
	I0911 12:30:24.725302 2266399 main.go:141] libmachine: (enable-default-cni-640433)     </disk>
	I0911 12:30:24.725317 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <interface type='network'>
	I0911 12:30:24.725332 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <source network='mk-enable-default-cni-640433'/>
	I0911 12:30:24.725347 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <model type='virtio'/>
	I0911 12:30:24.725360 2266399 main.go:141] libmachine: (enable-default-cni-640433)     </interface>
	I0911 12:30:24.725387 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <interface type='network'>
	I0911 12:30:24.725416 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <source network='default'/>
	I0911 12:30:24.725431 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <model type='virtio'/>
	I0911 12:30:24.725444 2266399 main.go:141] libmachine: (enable-default-cni-640433)     </interface>
	I0911 12:30:24.725460 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <serial type='pty'>
	I0911 12:30:24.725470 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <target port='0'/>
	I0911 12:30:24.725482 2266399 main.go:141] libmachine: (enable-default-cni-640433)     </serial>
	I0911 12:30:24.725491 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <console type='pty'>
	I0911 12:30:24.725511 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <target type='serial' port='0'/>
	I0911 12:30:24.725525 2266399 main.go:141] libmachine: (enable-default-cni-640433)     </console>
	I0911 12:30:24.725538 2266399 main.go:141] libmachine: (enable-default-cni-640433)     <rng model='virtio'>
	I0911 12:30:24.725548 2266399 main.go:141] libmachine: (enable-default-cni-640433)       <backend model='random'>/dev/random</backend>
	I0911 12:30:24.725571 2266399 main.go:141] libmachine: (enable-default-cni-640433)     </rng>
	I0911 12:30:24.725582 2266399 main.go:141] libmachine: (enable-default-cni-640433)     
	I0911 12:30:24.725597 2266399 main.go:141] libmachine: (enable-default-cni-640433)     
	I0911 12:30:24.725610 2266399 main.go:141] libmachine: (enable-default-cni-640433)   </devices>
	I0911 12:30:24.725623 2266399 main.go:141] libmachine: (enable-default-cni-640433) </domain>
	I0911 12:30:24.725636 2266399 main.go:141] libmachine: (enable-default-cni-640433) 
	I0911 12:30:24.730337 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:11:14:02 in network default
	I0911 12:30:24.731118 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:24.731144 2266399 main.go:141] libmachine: (enable-default-cni-640433) Ensuring networks are active...
	I0911 12:30:24.732055 2266399 main.go:141] libmachine: (enable-default-cni-640433) Ensuring network default is active
	I0911 12:30:24.732560 2266399 main.go:141] libmachine: (enable-default-cni-640433) Ensuring network mk-enable-default-cni-640433 is active
	I0911 12:30:24.733262 2266399 main.go:141] libmachine: (enable-default-cni-640433) Getting domain xml...
	I0911 12:30:24.734391 2266399 main.go:141] libmachine: (enable-default-cni-640433) Creating domain...
	I0911 12:30:26.360707 2266399 main.go:141] libmachine: (enable-default-cni-640433) Waiting to get IP...
	I0911 12:30:26.362206 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:26.362976 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:26.363230 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:26.363105 2266421 retry.go:31] will retry after 248.107229ms: waiting for machine to come up
	I0911 12:30:26.613129 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:26.613910 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:26.613940 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:26.613804 2266421 retry.go:31] will retry after 299.458723ms: waiting for machine to come up
	I0911 12:30:26.915330 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:26.915949 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:26.915977 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:26.915878 2266421 retry.go:31] will retry after 458.780142ms: waiting for machine to come up
	I0911 12:30:27.376462 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:27.377123 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:27.377150 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:27.377064 2266421 retry.go:31] will retry after 420.326352ms: waiting for machine to come up
	I0911 12:30:27.799582 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:27.800016 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:27.800104 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:27.800051 2266421 retry.go:31] will retry after 650.541015ms: waiting for machine to come up
	I0911 12:30:28.451977 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:28.452534 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:28.452599 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:28.452489 2266421 retry.go:31] will retry after 747.983956ms: waiting for machine to come up
	I0911 12:30:28.665173 2264591 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.503792 seconds
	I0911 12:30:28.665376 2264591 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:30:28.690032 2264591 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:30:29.455895 2264591 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:30:29.456152 2264591 kubeadm.go:322] [mark-control-plane] Marking the node custom-flannel-640433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:30:29.970411 2264591 kubeadm.go:322] [bootstrap-token] Using token: 7xs1ld.mwjrjb9rdnuqrmlg
	I0911 12:30:25.490969 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:27.997978 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:29.998045 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:29.972123 2264591 out.go:204]   - Configuring RBAC rules ...
	I0911 12:30:29.972294 2264591 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:30:29.981778 2264591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:30:29.994189 2264591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:30:30.002113 2264591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:30:30.012002 2264591 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:30:30.017993 2264591 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:30:30.041894 2264591 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:30:30.356278 2264591 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:30:30.417878 2264591 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:30:30.418910 2264591 kubeadm.go:322] 
	I0911 12:30:30.419030 2264591 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:30:30.419053 2264591 kubeadm.go:322] 
	I0911 12:30:30.419167 2264591 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:30:30.419179 2264591 kubeadm.go:322] 
	I0911 12:30:30.419210 2264591 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:30:30.419315 2264591 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:30:30.419390 2264591 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:30:30.419397 2264591 kubeadm.go:322] 
	I0911 12:30:30.419475 2264591 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:30:30.419497 2264591 kubeadm.go:322] 
	I0911 12:30:30.419605 2264591 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:30:30.419615 2264591 kubeadm.go:322] 
	I0911 12:30:30.419694 2264591 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:30:30.419815 2264591 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:30:30.419923 2264591 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:30:30.419936 2264591 kubeadm.go:322] 
	I0911 12:30:30.420052 2264591 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:30:30.420155 2264591 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:30:30.420166 2264591 kubeadm.go:322] 
	I0911 12:30:30.420303 2264591 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 7xs1ld.mwjrjb9rdnuqrmlg \
	I0911 12:30:30.420454 2264591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:30:30.420490 2264591 kubeadm.go:322] 	--control-plane 
	I0911 12:30:30.420499 2264591 kubeadm.go:322] 
	I0911 12:30:30.420653 2264591 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:30:30.420664 2264591 kubeadm.go:322] 
	I0911 12:30:30.420785 2264591 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 7xs1ld.mwjrjb9rdnuqrmlg \
	I0911 12:30:30.420944 2264591 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:30:30.421301 2264591 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:30:30.421329 2264591 cni.go:84] Creating CNI manager for "testdata/kube-flannel.yaml"
	I0911 12:30:30.423459 2264591 out.go:177] * Configuring testdata/kube-flannel.yaml (Container Networking Interface) ...
	I0911 12:30:30.425546 2264591 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.1/kubectl ...
	I0911 12:30:30.425637 2264591 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml
	I0911 12:30:30.446097 2264591 ssh_runner.go:352] existence check for /var/tmp/minikube/cni.yaml: stat -c "%!s(MISSING) %!y(MISSING)" /var/tmp/minikube/cni.yaml: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/tmp/minikube/cni.yaml': No such file or directory
	I0911 12:30:30.446141 2264591 ssh_runner.go:362] scp testdata/kube-flannel.yaml --> /var/tmp/minikube/cni.yaml (4591 bytes)
	I0911 12:30:30.493749 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0911 12:30:31.902404 2264591 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.408608304s)
	I0911 12:30:31.902486 2264591 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:30:31.902572 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:31.902621 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=custom-flannel-640433 minikube.k8s.io/updated_at=2023_09_11T12_30_31_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:32.067679 2264591 ops.go:34] apiserver oom_adj: -16
	I0911 12:30:32.067839 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:32.175483 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:29.202161 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:29.202734 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:29.202766 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:29.202694 2266421 retry.go:31] will retry after 721.010575ms: waiting for machine to come up
	I0911 12:30:29.925697 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:29.926247 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:29.926280 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:29.926198 2266421 retry.go:31] will retry after 1.377005661s: waiting for machine to come up
	I0911 12:30:31.304716 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:31.305405 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:31.305440 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:31.305305 2266421 retry.go:31] will retry after 1.4057945s: waiting for machine to come up
	I0911 12:30:32.712522 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:32.713102 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:32.713126 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:32.713051 2266421 retry.go:31] will retry after 2.307495653s: waiting for machine to come up
	I0911 12:30:32.490014 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:34.492328 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:32.781764 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:33.281728 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:33.781952 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:34.281020 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:34.781028 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:35.281997 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:35.781743 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:36.281654 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:36.781817 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:37.281511 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:35.022268 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:35.022832 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:35.022862 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:35.022774 2266421 retry.go:31] will retry after 2.498915285s: waiting for machine to come up
	I0911 12:30:37.523033 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:37.523683 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:37.523709 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:37.523601 2266421 retry.go:31] will retry after 2.369342485s: waiting for machine to come up
	I0911 12:30:36.494040 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:38.993713 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:37.781875 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:38.281490 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:38.781741 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:39.281026 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:39.781814 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:40.281542 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:40.781077 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:41.281773 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:41.781856 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:42.282004 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:39.894306 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:39.894994 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:39.895024 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:39.894967 2266421 retry.go:31] will retry after 4.226630073s: waiting for machine to come up
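The repeated "will retry after ..." lines above are a plain poll-with-growing-backoff loop: the driver asks libvirt for the domain's DHCP lease, and while no IP is assigned it sleeps for an increasing, jittered interval and tries again. The sketch below shows that pattern in Go under stated assumptions; the lookupIP helper, the delays, and the cap are illustrative, not minikube's real implementation.

    package main

    import (
        "fmt"
        "math/rand"
        "time"
    )

    // waitForIP polls lookupIP until it reports an address or the timeout expires.
    // lookupIP is a hypothetical helper that would query libvirt for the DHCP lease.
    func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        backoff := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, ok := lookupIP(); ok {
                return ip, nil
            }
            // Add some jitter and grow the delay, capped at a few seconds --
            // roughly the progression visible in the retry log above.
            time.Sleep(backoff + time.Duration(rand.Int63n(int64(backoff/2))))
            if backoff < 4*time.Second {
                backoff *= 2
            }
        }
        return "", fmt.Errorf("timed out waiting for machine IP")
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, bool) {
            attempts++
            if attempts < 4 {
                return "", false // pretend the lease has not shown up yet
            }
            return "192.168.50.82", true
        }, time.Minute)
        fmt.Println(ip, err)
    }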
	I0911 12:30:42.781734 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:43.281086 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:43.781697 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:44.281661 2264591 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:30:44.420971 2264591 kubeadm.go:1081] duration metric: took 12.518467497s to wait for elevateKubeSystemPrivileges.
	I0911 12:30:44.421011 2264591 kubeadm.go:406] StartCluster complete in 29.647968478s
	I0911 12:30:44.421057 2264591 settings.go:142] acquiring lock: {Name:mk4310ab91bbe46650bedc3e0e283bef2f18851f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:44.421163 2264591 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:30:44.422707 2264591 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/kubeconfig: {Name:mka1bf8543d8a2515d9f06b8183642905a57fc92 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:44.422945 2264591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0911 12:30:44.422954 2264591 addons.go:499] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false volumesnapshots:false]
	I0911 12:30:44.423049 2264591 addons.go:69] Setting default-storageclass=true in profile "custom-flannel-640433"
	I0911 12:30:44.423076 2264591 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "custom-flannel-640433"
	I0911 12:30:44.423049 2264591 addons.go:69] Setting storage-provisioner=true in profile "custom-flannel-640433"
	I0911 12:30:44.423148 2264591 config.go:182] Loaded profile config "custom-flannel-640433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:30:44.423192 2264591 addons.go:231] Setting addon storage-provisioner=true in "custom-flannel-640433"
	I0911 12:30:44.423267 2264591 host.go:66] Checking if "custom-flannel-640433" exists ...
	I0911 12:30:44.423628 2264591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:30:44.423664 2264591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:30:44.423666 2264591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:30:44.423691 2264591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:30:44.444246 2264591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45919
	I0911 12:30:44.444246 2264591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39665
	I0911 12:30:44.444877 2264591 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:30:44.444882 2264591 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:30:44.445492 2264591 main.go:141] libmachine: Using API Version  1
	I0911 12:30:44.445519 2264591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:30:44.445713 2264591 main.go:141] libmachine: Using API Version  1
	I0911 12:30:44.445735 2264591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:30:44.446113 2264591 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:30:44.446651 2264591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:30:44.446701 2264591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:30:44.447074 2264591 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:30:44.447305 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetState
	I0911 12:30:44.463918 2264591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38807
	I0911 12:30:44.464438 2264591 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:30:44.465063 2264591 main.go:141] libmachine: Using API Version  1
	I0911 12:30:44.465092 2264591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:30:44.465509 2264591 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:30:44.465750 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetState
	I0911 12:30:44.467663 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .DriverName
	I0911 12:30:44.469779 2264591 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0911 12:30:44.469648 2264591 addons.go:231] Setting addon default-storageclass=true in "custom-flannel-640433"
	I0911 12:30:44.471420 2264591 addons.go:423] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:30:44.471439 2264591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0911 12:30:44.471464 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHHostname
	I0911 12:30:44.471466 2264591 host.go:66] Checking if "custom-flannel-640433" exists ...
	I0911 12:30:44.471900 2264591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:30:44.471931 2264591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:30:44.475254 2264591 main.go:141] libmachine: (custom-flannel-640433) DBG | domain custom-flannel-640433 has defined MAC address 52:54:00:6a:5a:2f in network mk-custom-flannel-640433
	I0911 12:30:44.475524 2264591 kapi.go:248] "coredns" deployment in "kube-system" namespace and "custom-flannel-640433" context rescaled to 1 replicas
	I0911 12:30:44.475559 2264591 start.go:223] Will wait 15m0s for node &{Name: IP:192.168.72.232 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:30:44.477352 2264591 out.go:177] * Verifying Kubernetes components...
	I0911 12:30:41.494681 2263691 pod_ready.go:102] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:43.534464 2263691 pod_ready.go:92] pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:43.534494 2263691 pod_ready.go:81] duration metric: took 20.144359918s waiting for pod "calico-kube-controllers-7ddc4f45bc-w4t9s" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.534511 2263691 pod_ready.go:78] waiting up to 15m0s for pod "calico-node-l29h2" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.548615 2263691 pod_ready.go:92] pod "calico-node-l29h2" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:43.548643 2263691 pod_ready.go:81] duration metric: took 14.124573ms waiting for pod "calico-node-l29h2" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.548656 2263691 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-gkwst" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.556736 2263691 pod_ready.go:92] pod "coredns-5dd5756b68-gkwst" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:43.556764 2263691 pod_ready.go:81] duration metric: took 8.099949ms waiting for pod "coredns-5dd5756b68-gkwst" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.556778 2263691 pod_ready.go:78] waiting up to 15m0s for pod "etcd-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.566807 2263691 pod_ready.go:92] pod "etcd-calico-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:43.566830 2263691 pod_ready.go:81] duration metric: took 10.044784ms waiting for pod "etcd-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.566840 2263691 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.574578 2263691 pod_ready.go:92] pod "kube-apiserver-calico-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:43.574602 2263691 pod_ready.go:81] duration metric: took 7.756762ms waiting for pod "kube-apiserver-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.574613 2263691 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.885523 2263691 pod_ready.go:92] pod "kube-controller-manager-calico-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:43.885545 2263691 pod_ready.go:81] duration metric: took 310.926694ms waiting for pod "kube-controller-manager-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:43.885556 2263691 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-rxchg" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:44.287210 2263691 pod_ready.go:92] pod "kube-proxy-rxchg" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:44.287239 2263691 pod_ready.go:81] duration metric: took 401.675959ms waiting for pod "kube-proxy-rxchg" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:44.287252 2263691 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:44.687687 2263691 pod_ready.go:92] pod "kube-scheduler-calico-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:30:44.687718 2263691 pod_ready.go:81] duration metric: took 400.457591ms waiting for pod "kube-scheduler-calico-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:44.687732 2263691 pod_ready.go:38] duration metric: took 21.327786364s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:30:44.687753 2263691 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:30:44.687818 2263691 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:30:44.710178 2263691 api_server.go:72] duration metric: took 32.828453603s to wait for apiserver process to appear ...
	I0911 12:30:44.710204 2263691 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:30:44.710222 2263691 api_server.go:253] Checking apiserver healthz at https://192.168.61.56:8443/healthz ...
	I0911 12:30:44.720517 2263691 api_server.go:279] https://192.168.61.56:8443/healthz returned 200:
	ok
	I0911 12:30:44.722548 2263691 api_server.go:141] control plane version: v1.28.1
	I0911 12:30:44.722580 2263691 api_server.go:131] duration metric: took 12.370111ms to wait for apiserver health ...
	I0911 12:30:44.722590 2263691 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:30:44.892198 2263691 system_pods.go:59] 9 kube-system pods found
	I0911 12:30:44.892245 2263691 system_pods.go:61] "calico-kube-controllers-7ddc4f45bc-w4t9s" [c4482abf-c3fb-4d32-9f4f-cf056f5123ea] Running
	I0911 12:30:44.892254 2263691 system_pods.go:61] "calico-node-l29h2" [beebf21e-8516-46f4-a33f-7776aab40984] Running
	I0911 12:30:44.892261 2263691 system_pods.go:61] "coredns-5dd5756b68-gkwst" [96ab39e5-c5c0-4d04-a647-8c2db0b69264] Running
	I0911 12:30:44.892268 2263691 system_pods.go:61] "etcd-calico-640433" [02d397e3-fc95-484f-a537-0dff4ba550b1] Running
	I0911 12:30:44.892282 2263691 system_pods.go:61] "kube-apiserver-calico-640433" [37a347d0-f483-44d1-ba19-8751a09bb2d2] Running
	I0911 12:30:44.892300 2263691 system_pods.go:61] "kube-controller-manager-calico-640433" [bc9af3c3-860c-499b-9bd3-4fc55f1e7077] Running
	I0911 12:30:44.892306 2263691 system_pods.go:61] "kube-proxy-rxchg" [3f35dcd1-05a0-450b-ba9f-e41d81cd6db6] Running
	I0911 12:30:44.892315 2263691 system_pods.go:61] "kube-scheduler-calico-640433" [3b142435-4ef3-4fd4-b6cb-9fc1c352e47c] Running
	I0911 12:30:44.892327 2263691 system_pods.go:61] "storage-provisioner" [f4f2f3e2-16a7-4d77-8cc5-23b59fb3d6b6] Running
	I0911 12:30:44.892338 2263691 system_pods.go:74] duration metric: took 169.74186ms to wait for pod list to return data ...
	I0911 12:30:44.892351 2263691 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:30:45.085554 2263691 default_sa.go:45] found service account: "default"
	I0911 12:30:45.085599 2263691 default_sa.go:55] duration metric: took 193.237747ms for default service account to be created ...
	I0911 12:30:45.085613 2263691 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:30:45.292074 2263691 system_pods.go:86] 9 kube-system pods found
	I0911 12:30:45.292120 2263691 system_pods.go:89] "calico-kube-controllers-7ddc4f45bc-w4t9s" [c4482abf-c3fb-4d32-9f4f-cf056f5123ea] Running
	I0911 12:30:45.292131 2263691 system_pods.go:89] "calico-node-l29h2" [beebf21e-8516-46f4-a33f-7776aab40984] Running
	I0911 12:30:45.292140 2263691 system_pods.go:89] "coredns-5dd5756b68-gkwst" [96ab39e5-c5c0-4d04-a647-8c2db0b69264] Running
	I0911 12:30:45.292148 2263691 system_pods.go:89] "etcd-calico-640433" [02d397e3-fc95-484f-a537-0dff4ba550b1] Running
	I0911 12:30:45.292165 2263691 system_pods.go:89] "kube-apiserver-calico-640433" [37a347d0-f483-44d1-ba19-8751a09bb2d2] Running
	I0911 12:30:45.292176 2263691 system_pods.go:89] "kube-controller-manager-calico-640433" [bc9af3c3-860c-499b-9bd3-4fc55f1e7077] Running
	I0911 12:30:45.292188 2263691 system_pods.go:89] "kube-proxy-rxchg" [3f35dcd1-05a0-450b-ba9f-e41d81cd6db6] Running
	I0911 12:30:45.292199 2263691 system_pods.go:89] "kube-scheduler-calico-640433" [3b142435-4ef3-4fd4-b6cb-9fc1c352e47c] Running
	I0911 12:30:45.292209 2263691 system_pods.go:89] "storage-provisioner" [f4f2f3e2-16a7-4d77-8cc5-23b59fb3d6b6] Running
	I0911 12:30:45.292223 2263691 system_pods.go:126] duration metric: took 206.603289ms to wait for k8s-apps to be running ...
	I0911 12:30:45.292237 2263691 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:30:45.292317 2263691 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:30:45.309771 2263691 system_svc.go:56] duration metric: took 17.519115ms WaitForService to wait for kubelet.
	I0911 12:30:45.309807 2263691 kubeadm.go:581] duration metric: took 33.428088432s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:30:45.309832 2263691 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:30:45.486516 2263691 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:30:45.486561 2263691 node_conditions.go:123] node cpu capacity is 2
	I0911 12:30:45.486578 2263691 node_conditions.go:105] duration metric: took 176.739964ms to run NodePressure ...
	I0911 12:30:45.486593 2263691 start.go:228] waiting for startup goroutines ...
	I0911 12:30:45.486603 2263691 start.go:233] waiting for cluster config update ...
	I0911 12:30:45.486616 2263691 start.go:242] writing updated cluster config ...
	I0911 12:30:45.487003 2263691 ssh_runner.go:195] Run: rm -f paused
	I0911 12:30:45.543895 2263691 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:30:45.546297 2263691 out.go:177] * Done! kubectl is now configured to use "calico-640433" cluster and "default" namespace by default
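The pod_ready.go waits that precede the "Done!" line amount to repeatedly reading each kube-system pod and checking its Ready condition. A small, illustrative client-go sketch of that check follows; it reuses the kubeconfig path and pod name from the log above, but it is a sketch of the general technique, not the test helper's actual code.

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/17223-2215273/kubeconfig")
        if err != nil {
            panic(err)
        }
        client, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Poll until the pod reports the Ready condition, as the pod_ready.go waits do.
        for {
            pod, err := client.CoreV1().Pods("kube-system").Get(
                context.TODO(), "calico-kube-controllers-7ddc4f45bc-w4t9s", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
    }

    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }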
	I0911 12:30:44.475770 2264591 main.go:141] libmachine: (custom-flannel-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:5a:2f", ip: ""} in network mk-custom-flannel-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:29:54 +0000 UTC Type:0 Mac:52:54:00:6a:5a:2f Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:custom-flannel-640433 Clientid:01:52:54:00:6a:5a:2f}
	I0911 12:30:44.475982 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHPort
	I0911 12:30:44.479395 2264591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:30:44.479422 2264591 main.go:141] libmachine: (custom-flannel-640433) DBG | domain custom-flannel-640433 has defined IP address 192.168.72.232 and MAC address 52:54:00:6a:5a:2f in network mk-custom-flannel-640433
	I0911 12:30:44.479591 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHKeyPath
	I0911 12:30:44.479815 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHUsername
	I0911 12:30:44.480022 2264591 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/custom-flannel-640433/id_rsa Username:docker}
	I0911 12:30:44.489204 2264591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45831
	I0911 12:30:44.489630 2264591 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:30:44.490114 2264591 main.go:141] libmachine: Using API Version  1
	I0911 12:30:44.490141 2264591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:30:44.490457 2264591 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:30:44.491094 2264591 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:30:44.491147 2264591 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:30:44.507192 2264591 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34645
	I0911 12:30:44.507771 2264591 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:30:44.508388 2264591 main.go:141] libmachine: Using API Version  1
	I0911 12:30:44.508415 2264591 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:30:44.508890 2264591 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:30:44.509164 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetState
	I0911 12:30:44.511110 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .DriverName
	I0911 12:30:44.511403 2264591 addons.go:423] installing /etc/kubernetes/addons/storageclass.yaml
	I0911 12:30:44.511426 2264591 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0911 12:30:44.511449 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHHostname
	I0911 12:30:44.514395 2264591 main.go:141] libmachine: (custom-flannel-640433) DBG | domain custom-flannel-640433 has defined MAC address 52:54:00:6a:5a:2f in network mk-custom-flannel-640433
	I0911 12:30:44.514799 2264591 main.go:141] libmachine: (custom-flannel-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:6a:5a:2f", ip: ""} in network mk-custom-flannel-640433: {Iface:virbr2 ExpiryTime:2023-09-11 13:29:54 +0000 UTC Type:0 Mac:52:54:00:6a:5a:2f Iaid: IPaddr:192.168.72.232 Prefix:24 Hostname:custom-flannel-640433 Clientid:01:52:54:00:6a:5a:2f}
	I0911 12:30:44.514840 2264591 main.go:141] libmachine: (custom-flannel-640433) DBG | domain custom-flannel-640433 has defined IP address 192.168.72.232 and MAC address 52:54:00:6a:5a:2f in network mk-custom-flannel-640433
	I0911 12:30:44.515010 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHPort
	I0911 12:30:44.515219 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHKeyPath
	I0911 12:30:44.515382 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .GetSSHUsername
	I0911 12:30:44.515521 2264591 sshutil.go:53] new ssh client: &{IP:192.168.72.232 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/custom-flannel-640433/id_rsa Username:docker}
	I0911 12:30:44.644545 2264591 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.72.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0911 12:30:44.645445 2264591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0911 12:30:44.645472 2264591 node_ready.go:35] waiting up to 15m0s for node "custom-flannel-640433" to be "Ready" ...
	I0911 12:30:44.778120 2264591 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0911 12:30:45.501185 2264591 start.go:901] {"host.minikube.internal": 192.168.72.1} host record injected into CoreDNS's ConfigMap
	I0911 12:30:45.801273 2264591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.155783845s)
	I0911 12:30:45.801327 2264591 main.go:141] libmachine: Making call to close driver server
	I0911 12:30:45.801342 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .Close
	I0911 12:30:45.801357 2264591 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.023194189s)
	I0911 12:30:45.801407 2264591 main.go:141] libmachine: Making call to close driver server
	I0911 12:30:45.801426 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .Close
	I0911 12:30:45.801692 2264591 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:30:45.801713 2264591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:30:45.801724 2264591 main.go:141] libmachine: Making call to close driver server
	I0911 12:30:45.801735 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .Close
	I0911 12:30:45.801894 2264591 main.go:141] libmachine: (custom-flannel-640433) DBG | Closing plugin on server side
	I0911 12:30:45.801921 2264591 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:30:45.801936 2264591 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:30:45.801936 2264591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:30:45.801945 2264591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:30:45.801949 2264591 main.go:141] libmachine: Making call to close driver server
	I0911 12:30:45.801963 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .Close
	I0911 12:30:45.802199 2264591 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:30:45.802213 2264591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:30:45.802233 2264591 main.go:141] libmachine: Making call to close driver server
	I0911 12:30:45.802243 2264591 main.go:141] libmachine: (custom-flannel-640433) Calling .Close
	I0911 12:30:45.802579 2264591 main.go:141] libmachine: Successfully made call to close driver server
	I0911 12:30:45.802594 2264591 main.go:141] libmachine: Making call to close connection to plugin binary
	I0911 12:30:45.804550 2264591 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0911 12:30:45.806083 2264591 addons.go:502] enable addons completed in 1.383124952s: enabled=[storage-provisioner default-storageclass]
	I0911 12:30:46.669640 2264591 node_ready.go:58] node "custom-flannel-640433" has status "Ready":"False"
	I0911 12:30:44.126125 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:44.126688 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find current IP address of domain enable-default-cni-640433 in network mk-enable-default-cni-640433
	I0911 12:30:44.126729 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | I0911 12:30:44.126573 2266421 retry.go:31] will retry after 4.180905386s: waiting for machine to come up
	I0911 12:30:48.310371 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.310922 2266399 main.go:141] libmachine: (enable-default-cni-640433) Found IP for machine: 192.168.50.82
	I0911 12:30:48.310954 2266399 main.go:141] libmachine: (enable-default-cni-640433) Reserving static IP address...
	I0911 12:30:48.310972 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has current primary IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.311337 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | unable to find host DHCP lease matching {name: "enable-default-cni-640433", mac: "52:54:00:b5:81:bd", ip: "192.168.50.82"} in network mk-enable-default-cni-640433
	I0911 12:30:48.419144 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Getting to WaitForSSH function...
	I0911 12:30:48.419195 2266399 main.go:141] libmachine: (enable-default-cni-640433) Reserved static IP address: 192.168.50.82
	I0911 12:30:48.419213 2266399 main.go:141] libmachine: (enable-default-cni-640433) Waiting for SSH to be available...
	I0911 12:30:48.422946 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.423576 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:minikube Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:48.423614 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.423819 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Using SSH client type: external
	I0911 12:30:48.423873 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/id_rsa (-rw-------)
	I0911 12:30:48.423906 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.82 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:30:48.423934 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | About to run SSH command:
	I0911 12:30:48.423945 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | exit 0
	I0911 12:30:48.533216 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | SSH cmd err, output: <nil>: 
	I0911 12:30:48.533554 2266399 main.go:141] libmachine: (enable-default-cni-640433) KVM machine creation complete!
	I0911 12:30:48.533854 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetConfigRaw
	I0911 12:30:48.534504 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:48.534739 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:48.534940 2266399 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 12:30:48.534956 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetState
	I0911 12:30:48.536557 2266399 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 12:30:48.536576 2266399 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 12:30:48.536586 2266399 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 12:30:48.536597 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:48.539825 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.540522 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:48.540562 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.540718 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:48.540936 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.541142 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.541304 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:48.541479 2266399 main.go:141] libmachine: Using SSH client type: native
	I0911 12:30:48.542159 2266399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.82 22 <nil> <nil>}
	I0911 12:30:48.542182 2266399 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 12:30:48.677204 2266399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:30:48.677241 2266399 main.go:141] libmachine: Detecting the provisioner...
	I0911 12:30:48.677254 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:48.680443 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.680901 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:48.680966 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.681148 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:48.681380 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.681642 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.681817 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:48.682157 2266399 main.go:141] libmachine: Using SSH client type: native
	I0911 12:30:48.682638 2266399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.82 22 <nil> <nil>}
	I0911 12:30:48.682653 2266399 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 12:30:48.831080 2266399 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 12:30:48.831235 2266399 main.go:141] libmachine: found compatible host: buildroot
	I0911 12:30:48.831254 2266399 main.go:141] libmachine: Provisioning with buildroot...
	I0911 12:30:48.831269 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetMachineName
	I0911 12:30:48.831600 2266399 buildroot.go:166] provisioning hostname "enable-default-cni-640433"
	I0911 12:30:48.831635 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetMachineName
	I0911 12:30:48.831872 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:48.835126 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.835518 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:48.835555 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.835700 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:48.835856 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.836050 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.836214 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:48.836456 2266399 main.go:141] libmachine: Using SSH client type: native
	I0911 12:30:48.837095 2266399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.82 22 <nil> <nil>}
	I0911 12:30:48.837114 2266399 main.go:141] libmachine: About to run SSH command:
	sudo hostname enable-default-cni-640433 && echo "enable-default-cni-640433" | sudo tee /etc/hostname
	I0911 12:30:48.994481 2266399 main.go:141] libmachine: SSH cmd err, output: <nil>: enable-default-cni-640433
	
	I0911 12:30:48.994515 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:48.997985 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.998428 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:48.998488 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:48.998685 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:48.998905 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.999100 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:48.999329 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:48.999571 2266399 main.go:141] libmachine: Using SSH client type: native
	I0911 12:30:49.000221 2266399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.82 22 <nil> <nil>}
	I0911 12:30:49.000265 2266399 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\senable-default-cni-640433' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 enable-default-cni-640433/g' /etc/hosts;
				else 
					echo '127.0.1.1 enable-default-cni-640433' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:30:49.151700 2266399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
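
The hostname step above is two SSH commands: set the kernel hostname and /etc/hostname, then keep the 127.0.1.1 line in /etc/hosts pointing at the new name. As a reading aid, a minimal Go sketch that rebuilds those commands; the helper name is assumed, the shell fragments are copied from the log output above:

package main

import "fmt"

// hostnameCommands is a hypothetical helper that reassembles the two shell
// commands visible in the provisioning log: one sets the hostname and
// /etc/hostname, the other keeps the 127.0.1.1 entry in /etc/hosts in sync.
func hostnameCommands(name string) []string {
	set := fmt.Sprintf("sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	hosts := fmt.Sprintf("if ! grep -xq '.*\\s%[1]s' /etc/hosts; then "+
		"if grep -xq '127.0.1.1\\s.*' /etc/hosts; then "+
		"sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts; "+
		"else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi", name)
	return []string{set, hosts}
}

func main() {
	for _, c := range hostnameCommands("enable-default-cni-640433") {
		fmt.Println(c)
	}
}
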
	I0911 12:30:49.151737 2266399 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:30:49.151798 2266399 buildroot.go:174] setting up certificates
	I0911 12:30:49.151813 2266399 provision.go:83] configureAuth start
	I0911 12:30:49.151826 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetMachineName
	I0911 12:30:49.152155 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetIP
	I0911 12:30:49.155548 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.155974 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:49.156006 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.156210 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:49.159090 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.159475 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:49.159512 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.159665 2266399 provision.go:138] copyHostCerts
	I0911 12:30:49.159756 2266399 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:30:49.159771 2266399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:30:49.159868 2266399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:30:49.160002 2266399 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:30:49.160016 2266399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:30:49.160057 2266399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:30:49.160157 2266399 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:30:49.160169 2266399 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:30:49.160203 2266399 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:30:49.160304 2266399 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.enable-default-cni-640433 san=[192.168.50.82 192.168.50.82 localhost 127.0.0.1 minikube enable-default-cni-640433]
	I0911 12:30:49.301962 2266399 provision.go:172] copyRemoteCerts
	I0911 12:30:49.302023 2266399 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:30:49.302053 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:49.305347 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.305702 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:49.305736 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.305886 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:49.306126 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:49.306358 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:49.306524 2266399 sshutil.go:53] new ssh client: &{IP:192.168.50.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/id_rsa Username:docker}
	I0911 12:30:49.409728 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:30:49.439615 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0911 12:30:49.466406 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:30:49.494225 2266399 provision.go:86] duration metric: configureAuth took 342.38999ms
	I0911 12:30:49.494263 2266399 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:30:49.494539 2266399 config.go:182] Loaded profile config "enable-default-cni-640433": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:30:49.494717 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:49.497968 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.498380 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:49.498430 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.498597 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:49.498857 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:49.499081 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:49.499225 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:49.499403 2266399 main.go:141] libmachine: Using SSH client type: native
	I0911 12:30:49.499883 2266399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.82 22 <nil> <nil>}
	I0911 12:30:49.499914 2266399 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:30:49.867254 2266399 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
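
The step just above writes CRIO_MINIKUBE_OPTIONS with an --insecure-registry flag for the service CIDR into /etc/sysconfig/crio.minikube and restarts CRI-O; the %!s(MISSING) in the logged command is a printf verb eaten by the Go logger, not part of what actually runs on the host. A small sketch of what the command presumably looks like (function name assumed), with the service CIDR marked insecure so that in-cluster registries on that range can presumably be pulled from without TLS:

package main

import "fmt"

// crioOptionsCommand reconstructs the provisioning command shown in the log,
// with the printf verb restored.
func crioOptionsCommand(serviceCIDR string) string {
	opts := fmt.Sprintf("CRIO_MINIKUBE_OPTIONS='--insecure-registry %s '", serviceCIDR)
	return fmt.Sprintf("sudo mkdir -p /etc/sysconfig && printf %%s \"\n%s\n\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio", opts)
}

func main() {
	fmt.Println(crioOptionsCommand("10.96.0.0/12"))
}
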
	
	I0911 12:30:49.867294 2266399 main.go:141] libmachine: Checking connection to Docker...
	I0911 12:30:49.867308 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetURL
	I0911 12:30:49.868776 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | Using libvirt version 6000000
	I0911 12:30:49.871491 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.871940 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:49.871976 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.872100 2266399 main.go:141] libmachine: Docker is up and running!
	I0911 12:30:49.872118 2266399 main.go:141] libmachine: Reticulating splines...
	I0911 12:30:49.872127 2266399 client.go:171] LocalClient.Create took 25.742693746s
	I0911 12:30:49.872158 2266399 start.go:167] duration metric: libmachine.API.Create for "enable-default-cni-640433" took 25.742772589s
	I0911 12:30:49.872185 2266399 start.go:300] post-start starting for "enable-default-cni-640433" (driver="kvm2")
	I0911 12:30:49.872201 2266399 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:30:49.872223 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:49.872506 2266399 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:30:49.872538 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:49.875527 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.875951 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:49.875986 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:49.876150 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:49.876378 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:49.876565 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:49.876740 2266399 sshutil.go:53] new ssh client: &{IP:192.168.50.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/id_rsa Username:docker}
	I0911 12:30:49.977436 2266399 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:30:49.982120 2266399 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:30:49.982149 2266399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:30:49.982252 2266399 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:30:49.982377 2266399 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:30:49.982538 2266399 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:30:49.994837 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:30:50.022691 2266399 start.go:303] post-start completed in 150.482661ms
	I0911 12:30:50.022763 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetConfigRaw
	I0911 12:30:50.023497 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetIP
	I0911 12:30:50.026454 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.026891 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:50.026927 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.027387 2266399 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/config.json ...
	I0911 12:30:50.027615 2266399 start.go:128] duration metric: createHost completed in 25.919975781s
	I0911 12:30:50.027647 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:50.030432 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.030846 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:50.030920 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.031064 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:50.031275 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:50.031456 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:50.031642 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:50.031907 2266399 main.go:141] libmachine: Using SSH client type: native
	I0911 12:30:50.032534 2266399 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.50.82 22 <nil> <nil>}
	I0911 12:30:50.032558 2266399 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0911 12:30:50.178376 2266399 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694435450.153143152
	
	I0911 12:30:50.178409 2266399 fix.go:206] guest clock: 1694435450.153143152
	I0911 12:30:50.178419 2266399 fix.go:219] Guest: 2023-09-11 12:30:50.153143152 +0000 UTC Remote: 2023-09-11 12:30:50.027629984 +0000 UTC m=+26.045218502 (delta=125.513168ms)
	I0911 12:30:50.178447 2266399 fix.go:190] guest clock delta is within tolerance: 125.513168ms
	I0911 12:30:50.178455 2266399 start.go:83] releasing machines lock for "enable-default-cni-640433", held for 26.070954874s
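
The guest-clock check above compares the timestamp the VM reports over SSH (date +%s.%N, with the verbs again mangled to %!s(MISSING).%!N(MISSING) by the logger) against the host's wall clock and accepts the ~125ms skew. The arithmetic, redone with the values from the log; the one-second tolerance is an assumption, the log only states the delta is within tolerance:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values taken from the fix.go lines above: the guest reports a Unix
	// timestamp with nanoseconds, the host records its own wall clock.
	guest := time.Unix(1694435450, 153143152)
	remote := time.Date(2023, 9, 11, 12, 30, 50, 27629984, time.UTC)

	delta := guest.Sub(remote)
	fmt.Println(delta) // ~125.513168ms, matching the logged delta

	// Assumed bound; the log only reports "within tolerance".
	const tolerance = time.Second
	fmt.Println("within tolerance:", delta > -tolerance && delta < tolerance)
}
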
	I0911 12:30:50.178482 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:50.178864 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetIP
	I0911 12:30:50.182187 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.182722 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:50.182749 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.182955 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:50.183636 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:50.183887 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .DriverName
	I0911 12:30:50.184027 2266399 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:30:50.184085 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:50.184179 2266399 ssh_runner.go:195] Run: cat /version.json
	I0911 12:30:50.184210 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHHostname
	I0911 12:30:50.187482 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.187869 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.187901 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:50.187930 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.188096 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:50.188321 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:50.188336 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:50.188369 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:50.188487 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:50.188574 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHPort
	I0911 12:30:50.188651 2266399 sshutil.go:53] new ssh client: &{IP:192.168.50.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/id_rsa Username:docker}
	I0911 12:30:50.188750 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHKeyPath
	I0911 12:30:50.188939 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetSSHUsername
	I0911 12:30:50.189099 2266399 sshutil.go:53] new ssh client: &{IP:192.168.50.82 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/enable-default-cni-640433/id_rsa Username:docker}
	I0911 12:30:50.282852 2266399 ssh_runner.go:195] Run: systemctl --version
	I0911 12:30:50.310900 2266399 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:30:50.471191 2266399 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:30:50.478425 2266399 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:30:50.478514 2266399 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:30:50.495306 2266399 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0911 12:30:50.495337 2266399 start.go:466] detecting cgroup driver to use...
	I0911 12:30:50.495497 2266399 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:30:50.514132 2266399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:30:50.527786 2266399 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:30:50.527871 2266399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:30:50.543801 2266399 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:30:50.558792 2266399 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:30:50.730855 2266399 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:30:50.891946 2266399 docker.go:212] disabling docker service ...
	I0911 12:30:50.892033 2266399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:30:50.907401 2266399 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:30:50.923164 2266399 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:30:51.079878 2266399 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:30:51.222957 2266399 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:30:51.239032 2266399 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:30:51.260291 2266399 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:30:51.260370 2266399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:30:51.271732 2266399 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:30:51.271798 2266399 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:30:51.287170 2266399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:30:51.301138 2266399 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:30:51.316019 2266399 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:30:51.328690 2266399 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:30:51.343147 2266399 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:30:51.343218 2266399 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:30:51.359009 2266399 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:30:51.370427 2266399 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:30:51.513412 2266399 ssh_runner.go:195] Run: sudo systemctl restart crio
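
The crio.go lines above edit /etc/crio/crio.conf.d/02-crio.conf in place before restarting the service: pin the pause image, switch the cgroup manager to cgroupfs, and re-add conmon_cgroup = "pod". A sketch that simply replays those commands (helper name assumed, the sed expressions are copied from the log):

package main

import "fmt"

// crioConfigCommands rebuilds the sed edits shown in the log for pointing
// CRI-O at the desired pause image and cgroup manager, followed by the
// reload/restart that makes them take effect.
func crioConfigCommands(pauseImage, cgroupManager string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupManager, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfigCommands("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(c)
	}
}
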
	I0911 12:30:51.711594 2266399 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:30:51.711683 2266399 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:30:51.720489 2266399 start.go:534] Will wait 60s for crictl version
	I0911 12:30:51.720559 2266399 ssh_runner.go:195] Run: which crictl
	I0911 12:30:51.725141 2266399 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:30:51.763517 2266399 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:30:51.763651 2266399 ssh_runner.go:195] Run: crio --version
	I0911 12:30:51.814736 2266399 ssh_runner.go:195] Run: crio --version
	I0911 12:30:51.881115 2266399 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:30:49.162704 2264591 node_ready.go:58] node "custom-flannel-640433" has status "Ready":"False"
	I0911 12:30:50.729472 2264591 node_ready.go:49] node "custom-flannel-640433" has status "Ready":"True"
	I0911 12:30:50.729497 2264591 node_ready.go:38] duration metric: took 6.084000599s waiting for node "custom-flannel-640433" to be "Ready" ...
	I0911 12:30:50.729515 2264591 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0911 12:30:50.786275 2264591 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace to be "Ready" ...
	I0911 12:30:51.882734 2266399 main.go:141] libmachine: (enable-default-cni-640433) Calling .GetIP
	I0911 12:30:51.886059 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:51.886549 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:b5:81:bd", ip: ""} in network mk-enable-default-cni-640433: {Iface:virbr4 ExpiryTime:2023-09-11 13:30:42 +0000 UTC Type:0 Mac:52:54:00:b5:81:bd Iaid: IPaddr:192.168.50.82 Prefix:24 Hostname:enable-default-cni-640433 Clientid:01:52:54:00:b5:81:bd}
	I0911 12:30:51.886587 2266399 main.go:141] libmachine: (enable-default-cni-640433) DBG | domain enable-default-cni-640433 has defined IP address 192.168.50.82 and MAC address 52:54:00:b5:81:bd in network mk-enable-default-cni-640433
	I0911 12:30:51.886944 2266399 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0911 12:30:51.892119 2266399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:30:51.905407 2266399 localpath.go:92] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/client.crt -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/client.crt
	I0911 12:30:51.905560 2266399 localpath.go:117] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/client.key -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/client.key
	I0911 12:30:51.905684 2266399 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:30:51.905729 2266399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:30:51.937400 2266399 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:30:51.937493 2266399 ssh_runner.go:195] Run: which lz4
	I0911 12:30:51.941961 2266399 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0911 12:30:51.947098 2266399 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:30:51.947145 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:30:54.003523 2266399 crio.go:444] Took 2.061603 seconds to copy over tarball
	I0911 12:30:54.003628 2266399 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:30:52.848493 2264591 pod_ready.go:102] pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:55.347828 2264591 pod_ready.go:102] pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:58.212661 2266399 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (4.2089802s)
	I0911 12:30:58.212718 2266399 crio.go:451] Took 4.209140 seconds to extract the tarball
	I0911 12:30:58.212736 2266399 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:30:58.256027 2266399 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:30:58.325854 2266399 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:30:58.325886 2266399 cache_images.go:84] Images are preloaded, skipping loading
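
Because this is a first start, /preloaded.tar.lz4 does not yet exist on the node, so the cached image tarball (~457 MB) is copied over, unpacked into /var, deleted, and the image list re-checked. A sketch of that sequence (helper name assumed, paths and commands taken from the log above):

package main

import "fmt"

// preloadSteps lists the shell-level sequence visible in the preload section
// of the log: existence check, copy, extract, clean up, verify.
func preloadSteps(tarball string) []string {
	cache := "/home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4"
	return []string{
		fmt.Sprintf(`stat -c "%%s %%y" %s`, tarball),            // fails with "No such file" on first start
		fmt.Sprintf("scp %s --> %s", cache, tarball),            // copy the cached tarball onto the node
		fmt.Sprintf("sudo tar -I lz4 -C /var -xf %s", tarball),  // unpack container images into /var
		"rm " + tarball,
		"sudo crictl images --output json",                      // confirm all images are now preloaded
	}
}

func main() {
	for _, s := range preloadSteps("/preloaded.tar.lz4") {
		fmt.Println(s)
	}
}
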
	I0911 12:30:58.325998 2266399 ssh_runner.go:195] Run: crio config
	I0911 12:30:58.398090 2266399 cni.go:84] Creating CNI manager for "bridge"
	I0911 12:30:58.398129 2266399 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
	I0911 12:30:58.398155 2266399 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.82 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-640433 NodeName:enable-default-cni-640433 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.82"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.82 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:30:58.398355 2266399 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.82
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-640433"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.82
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.82"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:30:58.398460 2266399 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=enable-default-cni-640433 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.82
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:enable-default-cni-640433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:}
	I0911 12:30:58.398548 2266399 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:30:58.408942 2266399 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:30:58.409042 2266399 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:30:58.418940 2266399 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (384 bytes)
	I0911 12:30:58.438894 2266399 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:30:58.458018 2266399 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2109 bytes)
	I0911 12:30:58.476697 2266399 ssh_runner.go:195] Run: grep 192.168.50.82	control-plane.minikube.internal$ /etc/hosts
	I0911 12:30:58.481057 2266399 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.82	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
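
The same /etc/hosts rewrite appears twice in this run, once for host.minikube.internal and once for control-plane.minikube.internal: drop any stale line for the name, append the fresh IP, and copy the temp file back over /etc/hosts. A sketch of the command construction (helper name assumed, the shell is copied from the log):

package main

import "fmt"

// hostsEntryCommand mirrors the /etc/hosts rewrite shown in the log.
func hostsEntryCommand(ip, name string) string {
	// The delimiter in the real command is a literal tab character.
	entry := ip + "\t" + name
	return fmt.Sprintf(`{ grep -v $'\t%s$' "/etc/hosts"; echo "%s"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"`,
		name, entry)
}

func main() {
	fmt.Println(hostsEntryCommand("192.168.50.82", "control-plane.minikube.internal"))
}
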
	I0911 12:30:58.495965 2266399 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433 for IP: 192.168.50.82
	I0911 12:30:58.496017 2266399 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:58.496204 2266399 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:30:58.496269 2266399 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:30:58.496399 2266399 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/client.key
	I0911 12:30:58.496434 2266399 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.key.b644c25e
	I0911 12:30:58.496451 2266399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.crt.b644c25e with IP's: [192.168.50.82 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 12:30:58.707756 2266399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.crt.b644c25e ...
	I0911 12:30:58.707795 2266399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.crt.b644c25e: {Name:mkc542052172259fbe539ececd785c2ee3d847ca Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:58.708016 2266399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.key.b644c25e ...
	I0911 12:30:58.708032 2266399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.key.b644c25e: {Name:mk15edaa2397ff17e8c8b830f3c90e13583059fa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:58.708136 2266399 certs.go:337] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.crt.b644c25e -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.crt
	I0911 12:30:58.708224 2266399 certs.go:341] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.key.b644c25e -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.key
	I0911 12:30:58.708299 2266399 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.key
	I0911 12:30:58.708322 2266399 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.crt with IP's: []
	I0911 12:30:58.829903 2266399 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.crt ...
	I0911 12:30:58.829940 2266399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.crt: {Name:mkb0e42e0780f6044d886a1af57f35352079ab69 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:58.830152 2266399 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.key ...
	I0911 12:30:58.830169 2266399 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.key: {Name:mk2edd1026a079dc7d6bb94dc843aa2ff9046810 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:30:58.830409 2266399 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:30:58.830464 2266399 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:30:58.830482 2266399 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:30:58.830516 2266399 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:30:58.830559 2266399 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:30:58.830618 2266399 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:30:58.830683 2266399 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:30:58.831479 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:30:58.862257 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0911 12:30:58.889368 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:30:58.915068 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/enable-default-cni-640433/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:30:58.941353 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:30:58.968114 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:30:58.996068 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:30:59.024416 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:30:57.844544 2264591 pod_ready.go:102] pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:59.849880 2264591 pod_ready.go:102] pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace has status "Ready":"False"
	I0911 12:30:59.052264 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:30:59.080032 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:30:59.106005 2266399 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:30:59.131254 2266399 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:30:59.151610 2266399 ssh_runner.go:195] Run: openssl version
	I0911 12:30:59.158798 2266399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:30:59.172386 2266399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:30:59.178565 2266399 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:30:59.178649 2266399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:30:59.186325 2266399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:30:59.199970 2266399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:30:59.213145 2266399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:30:59.218784 2266399 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:30:59.218880 2266399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:30:59.226648 2266399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:30:59.240713 2266399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:30:59.254344 2266399 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:30:59.261330 2266399 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:30:59.261416 2266399 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:30:59.268680 2266399 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
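
The trust-store step above links each PEM into /etc/ssl/certs and then creates the <subject-hash>.0 symlink that OpenSSL-style directory lookups expect; the hash is whatever openssl x509 -hash -noout printed (b5213941 for minikubeCA.pem in this run). A sketch of the per-certificate commands (helper name assumed):

package main

import "fmt"

// caTrustCommands replays the certs.go sequence from the log for a single
// certificate: link it into /etc/ssl/certs, compute its subject hash, and
// create the <hash>.0 symlink.
func caTrustCommands(pem, hash string) []string {
	certPath := "/usr/share/ca-certificates/" + pem
	return []string{
		fmt.Sprintf(`sudo /bin/bash -c "test -s %[1]s && ln -fs %[1]s /etc/ssl/certs/%[2]s"`, certPath, pem),
		fmt.Sprintf("openssl x509 -hash -noout -in %s", certPath),
		fmt.Sprintf(`sudo /bin/bash -c "test -L /etc/ssl/certs/%[1]s.0 || ln -fs /etc/ssl/certs/%[2]s /etc/ssl/certs/%[1]s.0"`, hash, pem),
	}
}

func main() {
	for _, c := range caTrustCommands("minikubeCA.pem", "b5213941") {
		fmt.Println(c)
	}
}
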
	I0911 12:30:59.281443 2266399 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:30:59.287003 2266399 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 12:30:59.287074 2266399 kubeadm.go:404] StartCluster: {Name:enable-default-cni-640433 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:3072 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernet
esVersion:v1.28.1 ClusterName:enable-default-cni-640433 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.50.82 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: Mo
untMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:30:59.287178 2266399 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:30:59.287255 2266399 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:30:59.329530 2266399 cri.go:89] found id: ""
	I0911 12:30:59.329635 2266399 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:30:59.342738 2266399 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:30:59.354048 2266399 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:30:59.367338 2266399 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:30:59.367384 2266399 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:30:59.582326 2266399 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0911 12:31:02.343440 2264591 pod_ready.go:102] pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace has status "Ready":"False"
	I0911 12:31:04.847478 2264591 pod_ready.go:102] pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace has status "Ready":"False"
	I0911 12:31:05.881267 2264591 pod_ready.go:92] pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace has status "Ready":"True"
	I0911 12:31:05.881304 2264591 pod_ready.go:81] duration metric: took 15.094933078s waiting for pod "coredns-5dd5756b68-w69wr" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.881324 2264591 pod_ready.go:78] waiting up to 15m0s for pod "etcd-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.891203 2264591 pod_ready.go:92] pod "etcd-custom-flannel-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:31:05.891234 2264591 pod_ready.go:81] duration metric: took 9.902053ms waiting for pod "etcd-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.891248 2264591 pod_ready.go:78] waiting up to 15m0s for pod "kube-apiserver-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.901701 2264591 pod_ready.go:92] pod "kube-apiserver-custom-flannel-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:31:05.901744 2264591 pod_ready.go:81] duration metric: took 10.485918ms waiting for pod "kube-apiserver-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.901762 2264591 pod_ready.go:78] waiting up to 15m0s for pod "kube-controller-manager-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.909324 2264591 pod_ready.go:92] pod "kube-controller-manager-custom-flannel-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:31:05.909348 2264591 pod_ready.go:81] duration metric: took 7.577619ms waiting for pod "kube-controller-manager-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.909364 2264591 pod_ready.go:78] waiting up to 15m0s for pod "kube-proxy-chdsz" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.918125 2264591 pod_ready.go:92] pod "kube-proxy-chdsz" in "kube-system" namespace has status "Ready":"True"
	I0911 12:31:05.918157 2264591 pod_ready.go:81] duration metric: took 8.785373ms waiting for pod "kube-proxy-chdsz" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:05.918172 2264591 pod_ready.go:78] waiting up to 15m0s for pod "kube-scheduler-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:06.239119 2264591 pod_ready.go:92] pod "kube-scheduler-custom-flannel-640433" in "kube-system" namespace has status "Ready":"True"
	I0911 12:31:06.239155 2264591 pod_ready.go:81] duration metric: took 320.973808ms waiting for pod "kube-scheduler-custom-flannel-640433" in "kube-system" namespace to be "Ready" ...
	I0911 12:31:06.239174 2264591 pod_ready.go:38] duration metric: took 15.509643431s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
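
The pod_ready.go waits from the custom-flannel run (process 2264591 above) amount to polling each system pod until its Ready condition reports True. A rough sketch of that condition test, assuming the k8s.io/api module is available; everything except the condition constants is made up for illustration:

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// isPodReady is roughly the test behind the pod_ready.go lines: a pod counts
// as "Ready" once its PodReady condition is True. (Sketch only; the real
// helper also handles label selectors, namespaces, and timeouts.)
func isPodReady(pod *corev1.Pod) bool {
	for _, cond := range pod.Status.Conditions {
		if cond.Type == corev1.PodReady {
			return cond.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	pod := &corev1.Pod{Status: corev1.PodStatus{
		Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionTrue}},
	}}
	fmt.Println("ready:", isPodReady(pod))
}
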
	I0911 12:31:06.239202 2264591 api_server.go:52] waiting for apiserver process to appear ...
	I0911 12:31:06.239282 2264591 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 12:31:06.260170 2264591 api_server.go:72] duration metric: took 21.784566484s to wait for apiserver process to appear ...
	I0911 12:31:06.260204 2264591 api_server.go:88] waiting for apiserver healthz status ...
	I0911 12:31:06.260231 2264591 api_server.go:253] Checking apiserver healthz at https://192.168.72.232:8443/healthz ...
	I0911 12:31:06.269166 2264591 api_server.go:279] https://192.168.72.232:8443/healthz returned 200:
	ok
	I0911 12:31:06.273698 2264591 api_server.go:141] control plane version: v1.28.1
	I0911 12:31:06.273732 2264591 api_server.go:131] duration metric: took 13.520629ms to wait for apiserver health ...
	I0911 12:31:06.273748 2264591 system_pods.go:43] waiting for kube-system pods to appear ...
	I0911 12:31:06.445195 2264591 system_pods.go:59] 7 kube-system pods found
	I0911 12:31:06.445244 2264591 system_pods.go:61] "coredns-5dd5756b68-w69wr" [fa3e5125-e331-4745-9871-1236afd14933] Running
	I0911 12:31:06.445252 2264591 system_pods.go:61] "etcd-custom-flannel-640433" [820790cd-b893-4313-a736-7679e8d99822] Running
	I0911 12:31:06.445259 2264591 system_pods.go:61] "kube-apiserver-custom-flannel-640433" [8b643ed0-78b3-4b45-98f3-7f0fb3f8921b] Running
	I0911 12:31:06.445268 2264591 system_pods.go:61] "kube-controller-manager-custom-flannel-640433" [c7f4aecf-9af0-45fa-a680-3b53059d6489] Running
	I0911 12:31:06.445276 2264591 system_pods.go:61] "kube-proxy-chdsz" [06fe1408-20b1-44ba-a289-457f66c158eb] Running
	I0911 12:31:06.445282 2264591 system_pods.go:61] "kube-scheduler-custom-flannel-640433" [9757e2ed-94c0-4719-98e8-dc9abc3e7e84] Running
	I0911 12:31:06.445287 2264591 system_pods.go:61] "storage-provisioner" [2b143d54-80fa-4907-ab21-cd4736dc6f00] Running
	I0911 12:31:06.445297 2264591 system_pods.go:74] duration metric: took 171.541706ms to wait for pod list to return data ...
	I0911 12:31:06.445308 2264591 default_sa.go:34] waiting for default service account to be created ...
	I0911 12:31:06.639075 2264591 default_sa.go:45] found service account: "default"
	I0911 12:31:06.639110 2264591 default_sa.go:55] duration metric: took 193.791866ms for default service account to be created ...
	I0911 12:31:06.639120 2264591 system_pods.go:116] waiting for k8s-apps to be running ...
	I0911 12:31:06.844752 2264591 system_pods.go:86] 7 kube-system pods found
	I0911 12:31:06.844787 2264591 system_pods.go:89] "coredns-5dd5756b68-w69wr" [fa3e5125-e331-4745-9871-1236afd14933] Running
	I0911 12:31:06.844794 2264591 system_pods.go:89] "etcd-custom-flannel-640433" [820790cd-b893-4313-a736-7679e8d99822] Running
	I0911 12:31:06.844800 2264591 system_pods.go:89] "kube-apiserver-custom-flannel-640433" [8b643ed0-78b3-4b45-98f3-7f0fb3f8921b] Running
	I0911 12:31:06.844804 2264591 system_pods.go:89] "kube-controller-manager-custom-flannel-640433" [c7f4aecf-9af0-45fa-a680-3b53059d6489] Running
	I0911 12:31:06.844808 2264591 system_pods.go:89] "kube-proxy-chdsz" [06fe1408-20b1-44ba-a289-457f66c158eb] Running
	I0911 12:31:06.844836 2264591 system_pods.go:89] "kube-scheduler-custom-flannel-640433" [9757e2ed-94c0-4719-98e8-dc9abc3e7e84] Running
	I0911 12:31:06.844843 2264591 system_pods.go:89] "storage-provisioner" [2b143d54-80fa-4907-ab21-cd4736dc6f00] Running
	I0911 12:31:06.844853 2264591 system_pods.go:126] duration metric: took 205.726419ms to wait for k8s-apps to be running ...
	I0911 12:31:06.844862 2264591 system_svc.go:44] waiting for kubelet service to be running ....
	I0911 12:31:06.844917 2264591 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 12:31:06.868951 2264591 system_svc.go:56] duration metric: took 24.063728ms WaitForService to wait for kubelet.
	I0911 12:31:06.868986 2264591 kubeadm.go:581] duration metric: took 22.393392876s to wait for : map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] ...
	I0911 12:31:06.869008 2264591 node_conditions.go:102] verifying NodePressure condition ...
	I0911 12:31:07.042142 2264591 node_conditions.go:122] node storage ephemeral capacity is 17784752Ki
	I0911 12:31:07.042185 2264591 node_conditions.go:123] node cpu capacity is 2
	I0911 12:31:07.042202 2264591 node_conditions.go:105] duration metric: took 173.188713ms to run NodePressure ...
	I0911 12:31:07.042222 2264591 start.go:228] waiting for startup goroutines ...
	I0911 12:31:07.042231 2264591 start.go:233] waiting for cluster config update ...
	I0911 12:31:07.042246 2264591 start.go:242] writing updated cluster config ...
	I0911 12:31:07.042580 2264591 ssh_runner.go:195] Run: rm -f paused
	I0911 12:31:07.100698 2264591 start.go:600] kubectl: 1.28.1, cluster: 1.28.1 (minor skew: 0)
	I0911 12:31:07.103864 2264591 out.go:177] * Done! kubectl is now configured to use "custom-flannel-640433" cluster and "default" namespace by default
	I0911 12:31:12.830366 2266399 kubeadm.go:322] [init] Using Kubernetes version: v1.28.1
	I0911 12:31:12.830440 2266399 kubeadm.go:322] [preflight] Running pre-flight checks
	I0911 12:31:12.830515 2266399 kubeadm.go:322] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0911 12:31:12.830592 2266399 kubeadm.go:322] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0911 12:31:12.830678 2266399 kubeadm.go:322] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0911 12:31:12.830729 2266399 kubeadm.go:322] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0911 12:31:12.833014 2266399 out.go:204]   - Generating certificates and keys ...
	I0911 12:31:12.833133 2266399 kubeadm.go:322] [certs] Using existing ca certificate authority
	I0911 12:31:12.833237 2266399 kubeadm.go:322] [certs] Using existing apiserver certificate and key on disk
	I0911 12:31:12.833353 2266399 kubeadm.go:322] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0911 12:31:12.833439 2266399 kubeadm.go:322] [certs] Generating "front-proxy-ca" certificate and key
	I0911 12:31:12.833516 2266399 kubeadm.go:322] [certs] Generating "front-proxy-client" certificate and key
	I0911 12:31:12.833581 2266399 kubeadm.go:322] [certs] Generating "etcd/ca" certificate and key
	I0911 12:31:12.833673 2266399 kubeadm.go:322] [certs] Generating "etcd/server" certificate and key
	I0911 12:31:12.833841 2266399 kubeadm.go:322] [certs] etcd/server serving cert is signed for DNS names [enable-default-cni-640433 localhost] and IPs [192.168.50.82 127.0.0.1 ::1]
	I0911 12:31:12.833911 2266399 kubeadm.go:322] [certs] Generating "etcd/peer" certificate and key
	I0911 12:31:12.834088 2266399 kubeadm.go:322] [certs] etcd/peer serving cert is signed for DNS names [enable-default-cni-640433 localhost] and IPs [192.168.50.82 127.0.0.1 ::1]
	I0911 12:31:12.834175 2266399 kubeadm.go:322] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0911 12:31:12.834270 2266399 kubeadm.go:322] [certs] Generating "apiserver-etcd-client" certificate and key
	I0911 12:31:12.834333 2266399 kubeadm.go:322] [certs] Generating "sa" key and public key
	I0911 12:31:12.834405 2266399 kubeadm.go:322] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0911 12:31:12.834474 2266399 kubeadm.go:322] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0911 12:31:12.834544 2266399 kubeadm.go:322] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0911 12:31:12.834645 2266399 kubeadm.go:322] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0911 12:31:12.834723 2266399 kubeadm.go:322] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0911 12:31:12.834824 2266399 kubeadm.go:322] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0911 12:31:12.834913 2266399 kubeadm.go:322] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0911 12:31:12.836588 2266399 out.go:204]   - Booting up control plane ...
	I0911 12:31:12.836699 2266399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0911 12:31:12.836806 2266399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0911 12:31:12.836905 2266399 kubeadm.go:322] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0911 12:31:12.837016 2266399 kubeadm.go:322] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0911 12:31:12.837112 2266399 kubeadm.go:322] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0911 12:31:12.837155 2266399 kubeadm.go:322] [kubelet-start] Starting the kubelet
	I0911 12:31:12.837300 2266399 kubeadm.go:322] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0911 12:31:12.837373 2266399 kubeadm.go:322] [apiclient] All control plane components are healthy after 9.006309 seconds
	I0911 12:31:12.837471 2266399 kubeadm.go:322] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0911 12:31:12.837588 2266399 kubeadm.go:322] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0911 12:31:12.837645 2266399 kubeadm.go:322] [upload-certs] Skipping phase. Please see --upload-certs
	I0911 12:31:12.837879 2266399 kubeadm.go:322] [mark-control-plane] Marking the node enable-default-cni-640433 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0911 12:31:12.837954 2266399 kubeadm.go:322] [bootstrap-token] Using token: 15z0wj.xxslnxwelax45pzv
	I0911 12:31:12.839927 2266399 out.go:204]   - Configuring RBAC rules ...
	I0911 12:31:12.840059 2266399 kubeadm.go:322] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0911 12:31:12.840157 2266399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0911 12:31:12.840336 2266399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0911 12:31:12.840486 2266399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0911 12:31:12.840613 2266399 kubeadm.go:322] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0911 12:31:12.840714 2266399 kubeadm.go:322] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0911 12:31:12.840887 2266399 kubeadm.go:322] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0911 12:31:12.840969 2266399 kubeadm.go:322] [addons] Applied essential addon: CoreDNS
	I0911 12:31:12.841030 2266399 kubeadm.go:322] [addons] Applied essential addon: kube-proxy
	I0911 12:31:12.841036 2266399 kubeadm.go:322] 
	I0911 12:31:12.841116 2266399 kubeadm.go:322] Your Kubernetes control-plane has initialized successfully!
	I0911 12:31:12.841121 2266399 kubeadm.go:322] 
	I0911 12:31:12.841214 2266399 kubeadm.go:322] To start using your cluster, you need to run the following as a regular user:
	I0911 12:31:12.841231 2266399 kubeadm.go:322] 
	I0911 12:31:12.841292 2266399 kubeadm.go:322]   mkdir -p $HOME/.kube
	I0911 12:31:12.841390 2266399 kubeadm.go:322]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0911 12:31:12.841436 2266399 kubeadm.go:322]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0911 12:31:12.841448 2266399 kubeadm.go:322] 
	I0911 12:31:12.841525 2266399 kubeadm.go:322] Alternatively, if you are the root user, you can run:
	I0911 12:31:12.841531 2266399 kubeadm.go:322] 
	I0911 12:31:12.841572 2266399 kubeadm.go:322]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0911 12:31:12.841576 2266399 kubeadm.go:322] 
	I0911 12:31:12.841617 2266399 kubeadm.go:322] You should now deploy a pod network to the cluster.
	I0911 12:31:12.841679 2266399 kubeadm.go:322] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0911 12:31:12.841749 2266399 kubeadm.go:322]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0911 12:31:12.841755 2266399 kubeadm.go:322] 
	I0911 12:31:12.841841 2266399 kubeadm.go:322] You can now join any number of control-plane nodes by copying certificate authorities
	I0911 12:31:12.841968 2266399 kubeadm.go:322] and service account keys on each node and then running the following as root:
	I0911 12:31:12.841983 2266399 kubeadm.go:322] 
	I0911 12:31:12.842080 2266399 kubeadm.go:322]   kubeadm join control-plane.minikube.internal:8443 --token 15z0wj.xxslnxwelax45pzv \
	I0911 12:31:12.842207 2266399 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed \
	I0911 12:31:12.842233 2266399 kubeadm.go:322] 	--control-plane 
	I0911 12:31:12.842238 2266399 kubeadm.go:322] 
	I0911 12:31:12.842342 2266399 kubeadm.go:322] Then you can join any number of worker nodes by running the following on each as root:
	I0911 12:31:12.842348 2266399 kubeadm.go:322] 
	I0911 12:31:12.842446 2266399 kubeadm.go:322] kubeadm join control-plane.minikube.internal:8443 --token 15z0wj.xxslnxwelax45pzv \
	I0911 12:31:12.842581 2266399 kubeadm.go:322] 	--discovery-token-ca-cert-hash sha256:64468b34da2ef264c43c7b3804bdc69112168b168467589741cf8a03828b24ed 
	I0911 12:31:12.842592 2266399 cni.go:84] Creating CNI manager for "bridge"
	I0911 12:31:12.844417 2266399 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0911 12:31:12.846272 2266399 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0911 12:31:12.872841 2266399 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0911 12:31:12.946316 2266399 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0911 12:31:12.946342 2266399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:31:12.946416 2266399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl label nodes minikube.k8s.io/version=v1.31.2 minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9 minikube.k8s.io/name=enable-default-cni-640433 minikube.k8s.io/updated_at=2023_09_11T12_31_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:31:13.363852 2266399 ops.go:34] apiserver oom_adj: -16
	I0911 12:31:13.364017 2266399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0911 12:31:13.528145 2266399 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
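	The enable-default-cni run above configures the bridge CNI by copying a 457-byte 1-k8s.conflist into /etc/cni/net.d; the actual file contents are not captured in this log. The snippet below is only a generic, hypothetical sketch of what a minimal bridge-plugin conflist typically looks like (plugin names and the subnet are illustrative assumptions, not the bytes minikube wrote):

		# Hypothetical minimal bridge CNI config; minikube's real 1-k8s.conflist may differ
		sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
		{
		  "cniVersion": "0.3.1",
		  "name": "bridge",
		  "plugins": [
		    {
		      "type": "bridge",
		      "bridge": "bridge",
		      "isDefaultGateway": true,
		      "ipMasq": true,
		      "hairpinMode": true,
		      "ipam": {
		        "type": "host-local",
		        "subnet": "10.244.0.0/16"
		      }
		    },
		    {
		      "type": "portmap",
		      "capabilities": { "portMappings": true }
		    }
		  ]
		}
		EOF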
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:08:09 UTC, ends at Mon 2023-09-11 12:31:16 UTC. --
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.310826519Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=fac87f55-bfbd-41d1-89f1-97b0eea4eadc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.311201157Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=fac87f55-bfbd-41d1-89f1-97b0eea4eadc name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.353091220Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=bbdbc8db-aa79-4db4-b6d8-52ddcea55efa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.353184391Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=bbdbc8db-aa79-4db4-b6d8-52ddcea55efa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.353511712Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=bbdbc8db-aa79-4db4-b6d8-52ddcea55efa name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.394528972Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6015d134-74d7-436f-8992-8c8fabfebcdd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.394655574Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6015d134-74d7-436f-8992-8c8fabfebcdd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.394898975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6015d134-74d7-436f-8992-8c8fabfebcdd name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.439383347Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=01739efd-cb66-4c11-abaf-1dad099c4153 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.439486929Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=01739efd-cb66-4c11-abaf-1dad099c4153 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.439719408Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=01739efd-cb66-4c11-abaf-1dad099c4153 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.483655278Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4bd5aee8-f6d9-4ae0-a04e-377bc94dbee0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.483763253Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4bd5aee8-f6d9-4ae0-a04e-377bc94dbee0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.484245271Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4bd5aee8-f6d9-4ae0-a04e-377bc94dbee0 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.526398560Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=cbaeb8fe-4d83-44a1-b711-790528d9670b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.526545336Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=cbaeb8fe-4d83-44a1-b711-790528d9670b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.526864180Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=cbaeb8fe-4d83-44a1-b711-790528d9670b name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.565202137Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="go-grpc-middleware/chain.go:25" id=eb014950-cd8e-4cbd-889c-dbae11b313df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.565576830Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&PodSandboxMetadata{Name:busybox,Uid:1a6325f3-c610-437f-a81f-36da95fc4ebf,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434129871178438,Labels:map[string]string{integration-test: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:45.938426303Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-xszs4,Uid:e58151f1-7503-49df-b847-67ac70d0ef74,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:169443
4129845079500,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:45.938427642Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:b81b2c3cec4acba0a2b49eccf1ea3bf0972e3301d4c2b63fe9f9d1c983d3151a,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-tw6td,Uid:37d0a828-9243-4359-be39-1c2099835e45,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434128262777730,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-tw6td,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 37d0a828-9243-4359-be39-1c2099835e45,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11
T12:08:45.938424481Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&PodSandboxMetadata{Name:kube-proxy-ldgjr,Uid:34e5049f-8cba-49bf-96af-f5e0338e4aa5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434126310003472,Labels:map[string]string{controller-revision-hash: 5d69f4f5b5,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 34e5049f-8cba-49bf-96af-f5e0338e4aa5,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2023-09-11T12:08:45.938421157Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:deb073a7-107f-419d-9b5e-16c7722b957d,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434126283435418,Labels:
map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,ku
bernetes.io/config.seen: 2023-09-11T12:08:45.938409906Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-484027,Uid:483e3b587026f25bcbe9b42b4b588cca,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118495200712,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.230:8444,kubernetes.io/config.hash: 483e3b587026f25bcbe9b42b4b588cca,kubernetes.io/config.seen: 2023-09-11T12:08:37.925808333Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86
a3,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-484027,Uid:905d42441501c2e6979afd6df9e96a0e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118487150546,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.230:2379,kubernetes.io/config.hash: 905d42441501c2e6979afd6df9e96a0e,kubernetes.io/config.seen: 2023-09-11T12:08:37.925807249Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-default-k8s-diff-port-484027,Uid:8f43ce84a1b0e0279a12b1137f2ed4cd,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118483133068,Labels:map[s
tring]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8f43ce84a1b0e0279a12b1137f2ed4cd,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 8f43ce84a1b0e0279a12b1137f2ed4cd,kubernetes.io/config.seen: 2023-09-11T12:08:37.925802150Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-484027,Uid:415a16c4d6051dd25329d839e8bc8363,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1694434118479250864,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,tier: control-p
lane,},Annotations:map[string]string{kubernetes.io/config.hash: 415a16c4d6051dd25329d839e8bc8363,kubernetes.io/config.seen: 2023-09-11T12:08:37.925806130Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="go-grpc-middleware/chain.go:25" id=eb014950-cd8e-4cbd-889c-dbae11b313df name=/runtime.v1.RuntimeService/ListPodSandbox
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.566630576Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6b74c6cf-5944-47f5-9318-051e69760f07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.566738338Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6b74c6cf-5944-47f5-9318-051e69760f07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.567233118Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6b74c6cf-5944-47f5-9318-051e69760f07 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.571316242Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2e97c82c-0bba-40cb-a4dc-df2b1552af2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.571408385Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2e97c82c-0bba-40cb-a4dc-df2b1552af2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:31:16 default-k8s-diff-port-484027 crio[716]: time="2023-09-11 12:31:16.571701268Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_RUNNING,CreatedAt:1694434158275497714,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: deb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount
: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f44e8458b48fa61dd73551f7d5bf96dddc6672a7115897b4ab2c5d7ef566e405,PodSandboxId:ef1063b26e24d381b22378175ca3775d9c1a343bf298659924e86a59ffc28778,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,State:CONTAINER_RUNNING,CreatedAt:1694434133171186424,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 1a6325f3-c610-437f-a81f-36da95fc4ebf,},Annotations:map[string]string{io.kubernetes.container.hash: 3e054ed2,io.kubernetes.container.restartCount: 1,io.kubernetes.
container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27,PodSandboxId:5079d932dd6dd4254afb303abfd742c2be8d245263000b56a1dad8a137d5d12d,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,State:CONTAINER_RUNNING,CreatedAt:1694434131407520612,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-xszs4,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e58151f1-7503-49df-b847-67ac70d0ef74,},Annotations:map[string]string{io.kubernetes.container.hash: 37bf14a0,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\
":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124,PodSandboxId:2904254cb089b05c4332d6330c6a4a63c69055544979d923e2ae260d51ef756c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3,State:CONTAINER_RUNNING,CreatedAt:1694434127268042261,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-ldgjr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
34e5049f-8cba-49bf-96af-f5e0338e4aa5,},Annotations:map[string]string{io.kubernetes.container.hash: 5d65b240,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329,PodSandboxId:951133aad1b4104aaa634d29936751fb6f484864032539a095a1dad0f18e7137,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944,State:CONTAINER_EXITED,CreatedAt:1694434127172201608,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d
eb073a7-107f-419d-9b5e-16c7722b957d,},Annotations:map[string]string{io.kubernetes.container.hash: 1755b0f2,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7,PodSandboxId:c686329de6a1f21ddd4535c14a7ba52454b914ec1d1b8a4fd21d7d79220c86a3,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15,State:CONTAINER_RUNNING,CreatedAt:1694434119848927499,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 905d42441501c2e6979afd6df9e96a0e,},An
notations:map[string]string{io.kubernetes.container.hash: f002fcc0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6,PodSandboxId:9e85532741745dff6a81d4e18a4ecbe76182901cdac3efd3f5858b08e396eab4,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4,State:CONTAINER_RUNNING,CreatedAt:1694434119534299238,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 415a16c4d6051dd25329d839e8bc8363,},An
notations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45,PodSandboxId:2ef2df8aa1112a9cf26b2c49b430bd2ce304103f688698622b1ebf88c343dbd5,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774,State:CONTAINER_RUNNING,CreatedAt:1694434119308024388,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 483e3b587026f25bcbe9b42b4b588cca,},An
notations:map[string]string{io.kubernetes.container.hash: 22541134,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6,PodSandboxId:020cab58e27011a97febaa4a927156508844ecb63606a4339f2f5490f684080b,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830,State:CONTAINER_RUNNING,CreatedAt:1694434119010808560,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-484027,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8
f43ce84a1b0e0279a12b1137f2ed4cd,},Annotations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2e97c82c-0bba-40cb-a4dc-df2b1552af2c name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID
	8cc82bfb8abe6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      21 minutes ago      Running             storage-provisioner       2                   951133aad1b41
	f44e8458b48fa       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   22 minutes ago      Running             busybox                   1                   ef1063b26e24d
	8e75cc646ed39       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      22 minutes ago      Running             coredns                   1                   5079d932dd6dd
	08777e80449f2       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5                                      22 minutes ago      Running             kube-proxy                1                   2904254cb089b
	f5464e92c81e8       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      22 minutes ago      Exited              storage-provisioner       1                   951133aad1b41
	153e729fe2650       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      22 minutes ago      Running             etcd                      1                   c686329de6a1f
	fc4e7b5d1258c       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a                                      22 minutes ago      Running             kube-scheduler            1                   9e85532741745
	07023f1836d74       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77                                      22 minutes ago      Running             kube-apiserver            1                   2ef2df8aa1112
	169c262446f69       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac                                      22 minutes ago      Running             kube-controller-manager   1                   020cab58e2701
	
	* 
	* ==> coredns [8e75cc646ed396cc9f1f408699f8dd144f3a131bf120ca42513a329089ee9e27] <==
	* .:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:36512 - 31348 "HINFO IN 324436852554161395.8800393712480390138. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.010621908s
	
	* 
	* ==> describe nodes <==
	* Name:               default-k8s-diff-port-484027
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-484027
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=default-k8s-diff-port-484027
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T12_02_00_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 12:01:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-484027
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 12:31:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:29:42 +0000   Mon, 11 Sep 2023 12:01:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:29:42 +0000   Mon, 11 Sep 2023 12:01:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:29:42 +0000   Mon, 11 Sep 2023 12:01:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:29:42 +0000   Mon, 11 Sep 2023 12:08:55 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.230
	  Hostname:    default-k8s-diff-port-484027
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 05a3bd74df704341a61f28302ea21153
	  System UUID:                05a3bd74-df70-4341-a61f-28302ea21153
	  Boot ID:                    264a2c7a-c929-45ee-9e5a-5cf8e0d6a579
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 coredns-5dd5756b68-xszs4                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     29m
	  kube-system                 etcd-default-k8s-diff-port-484027                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         29m
	  kube-system                 kube-apiserver-default-k8s-diff-port-484027             250m (12%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-484027    200m (10%)    0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-proxy-ldgjr                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 kube-scheduler-default-k8s-diff-port-484027             100m (5%)     0 (0%)      0 (0%)           0 (0%)         29m
	  kube-system                 metrics-server-57f55c9bc5-tw6td                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         28m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         29m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 29m                kube-proxy       
	  Normal  Starting                 22m                kube-proxy       
	  Normal  NodeHasSufficientPID     29m (x7 over 29m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  29m (x8 over 29m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 29m                kubelet          Starting kubelet.
	  Normal  NodeReady                29m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    29m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     29m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  29m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  29m                kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           29m                node-controller  Node default-k8s-diff-port-484027 event: Registered Node default-k8s-diff-port-484027 in Controller
	  Normal  Starting                 22m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  22m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    22m (x8 over 22m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     22m (x7 over 22m)  kubelet          Node default-k8s-diff-port-484027 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           22m                node-controller  Node default-k8s-diff-port-484027 event: Registered Node default-k8s-diff-port-484027 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep11 12:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000004] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.102554] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.992736] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.888880] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.155704] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000001] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.571362] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000016] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.899325] systemd-fstab-generator[642]: Ignoring "noauto" for root device
	[  +0.137666] systemd-fstab-generator[653]: Ignoring "noauto" for root device
	[  +0.178357] systemd-fstab-generator[666]: Ignoring "noauto" for root device
	[  +0.135622] systemd-fstab-generator[677]: Ignoring "noauto" for root device
	[  +0.286939] systemd-fstab-generator[701]: Ignoring "noauto" for root device
	[ +18.091757] systemd-fstab-generator[917]: Ignoring "noauto" for root device
	[ +20.584498] kauditd_printk_skb: 29 callbacks suppressed
	
	* 
	* ==> etcd [153e729fe2650d0a71ac9d98b6f0935494123a20c6c6e5c6e809a7ad2146cae7] <==
	* {"level":"info","ts":"2023-09-11T12:27:16.953427Z","caller":"traceutil/trace.go:171","msg":"trace[1202512919] transaction","detail":"{read_only:false; response_revision:1474; number_of_response:1; }","duration":"266.872217ms","start":"2023-09-11T12:27:16.686495Z","end":"2023-09-11T12:27:16.953367Z","steps":["trace[1202512919] 'process raft request'  (duration: 266.288768ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T12:27:17.339701Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"256.821817ms","expected-duration":"100ms","prefix":"","request":"header:<ID:7210978253532133942 username:\"kube-apiserver-etcd-client\" auth_revision:1 > lease_grant:<ttl:15-second id:64128a8424d97a35>","response":"size:40"}
	{"level":"warn","ts":"2023-09-11T12:27:17.339876Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T12:27:16.954495Z","time spent":"385.36595ms","remote":"127.0.0.1:55850","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"warn","ts":"2023-09-11T12:28:07.434717Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T12:28:06.878782Z","time spent":"555.925364ms","remote":"127.0.0.1:55850","response type":"/etcdserverpb.Lease/LeaseGrant","request count":-1,"request size":-1,"response count":-1,"response size":-1,"request content":""}
	{"level":"info","ts":"2023-09-11T12:28:07.559239Z","caller":"traceutil/trace.go:171","msg":"trace[2026046860] transaction","detail":"{read_only:false; response_revision:1514; number_of_response:1; }","duration":"123.349275ms","start":"2023-09-11T12:28:07.435856Z","end":"2023-09-11T12:28:07.559205Z","steps":["trace[2026046860] 'process raft request'  (duration: 123.24901ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:28:07.559516Z","caller":"traceutil/trace.go:171","msg":"trace[1967007014] linearizableReadLoop","detail":"{readStateIndex:1783; appliedIndex:1782; }","duration":"286.830038ms","start":"2023-09-11T12:28:07.272673Z","end":"2023-09-11T12:28:07.559504Z","steps":["trace[1967007014] 'read index received'  (duration: 162.113972ms)","trace[1967007014] 'applied index is now lower than readState.Index'  (duration: 124.714688ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T12:28:07.559273Z","caller":"traceutil/trace.go:171","msg":"trace[555232340] transaction","detail":"{read_only:false; response_revision:1513; number_of_response:1; }","duration":"551.477413ms","start":"2023-09-11T12:28:07.007783Z","end":"2023-09-11T12:28:07.559261Z","steps":["trace[555232340] 'process raft request'  (duration: 489.714595ms)","trace[555232340] 'compare'  (duration: 61.274518ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T12:28:07.55979Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"287.13984ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T12:28:07.56112Z","caller":"traceutil/trace.go:171","msg":"trace[544798644] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1514; }","duration":"288.463176ms","start":"2023-09-11T12:28:07.272642Z","end":"2023-09-11T12:28:07.561106Z","steps":["trace[544798644] 'agreement among raft nodes before linearized reading'  (duration: 287.114841ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T12:28:07.5598Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2023-09-11T12:28:07.007766Z","time spent":"551.852867ms","remote":"127.0.0.1:55902","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":692,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-pqz7nibjck2yfmakqjjiwog65e\" mod_revision:1505 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-pqz7nibjck2yfmakqjjiwog65e\" value_size:619 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-pqz7nibjck2yfmakqjjiwog65e\" > >"}
	{"level":"warn","ts":"2023-09-11T12:28:07.559866Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"239.995839ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1129"}
	{"level":"info","ts":"2023-09-11T12:28:07.561509Z","caller":"traceutil/trace.go:171","msg":"trace[1884723988] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:1514; }","duration":"241.632878ms","start":"2023-09-11T12:28:07.319863Z","end":"2023-09-11T12:28:07.561496Z","steps":["trace[1884723988] 'agreement among raft nodes before linearized reading'  (duration: 239.965635ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T12:28:07.559912Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"197.253042ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kubernetes-dashboard/\" range_end:\"/registry/pods/kubernetes-dashboard0\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T12:28:07.561696Z","caller":"traceutil/trace.go:171","msg":"trace[1940655104] range","detail":"{range_begin:/registry/pods/kubernetes-dashboard/; range_end:/registry/pods/kubernetes-dashboard0; response_count:0; response_revision:1514; }","duration":"199.031928ms","start":"2023-09-11T12:28:07.362654Z","end":"2023-09-11T12:28:07.561686Z","steps":["trace[1940655104] 'agreement among raft nodes before linearized reading'  (duration: 197.241799ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:28:34.038035Z","caller":"traceutil/trace.go:171","msg":"trace[1337142102] transaction","detail":"{read_only:false; response_revision:1535; number_of_response:1; }","duration":"234.89603ms","start":"2023-09-11T12:28:33.803116Z","end":"2023-09-11T12:28:34.038012Z","steps":["trace[1337142102] 'process raft request'  (duration: 234.644175ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:28:43.850831Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1302}
	{"level":"info","ts":"2023-09-11T12:28:43.853281Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1302,"took":"2.034383ms","hash":2304121300}
	{"level":"info","ts":"2023-09-11T12:28:43.853361Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2304121300,"revision":1302,"compact-revision":1058}
	{"level":"info","ts":"2023-09-11T12:29:42.807823Z","caller":"traceutil/trace.go:171","msg":"trace[1337616420] transaction","detail":"{read_only:false; response_revision:1592; number_of_response:1; }","duration":"235.687969ms","start":"2023-09-11T12:29:42.572087Z","end":"2023-09-11T12:29:42.807775Z","steps":["trace[1337616420] 'process raft request'  (duration: 234.984747ms)"],"step_count":1}
	{"level":"info","ts":"2023-09-11T12:29:43.024202Z","caller":"traceutil/trace.go:171","msg":"trace[1172548831] linearizableReadLoop","detail":"{readStateIndex:1884; appliedIndex:1883; }","duration":"180.962457ms","start":"2023-09-11T12:29:42.84322Z","end":"2023-09-11T12:29:43.024182Z","steps":["trace[1172548831] 'read index received'  (duration: 96.125928ms)","trace[1172548831] 'applied index is now lower than readState.Index'  (duration: 84.835709ms)"],"step_count":2}
	{"level":"info","ts":"2023-09-11T12:29:43.024775Z","caller":"traceutil/trace.go:171","msg":"trace[1873119193] transaction","detail":"{read_only:false; response_revision:1593; number_of_response:1; }","duration":"205.613501ms","start":"2023-09-11T12:29:42.819144Z","end":"2023-09-11T12:29:43.024757Z","steps":["trace[1873119193] 'process raft request'  (duration: 120.254662ms)","trace[1873119193] 'compare'  (duration: 84.490156ms)"],"step_count":2}
	{"level":"warn","ts":"2023-09-11T12:29:43.024697Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"181.350487ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/apiregistration.k8s.io/apiservices/\" range_end:\"/registry/apiregistration.k8s.io/apiservices0\" count_only:true ","response":"range_response_count:0 size:7"}
	{"level":"info","ts":"2023-09-11T12:29:43.025342Z","caller":"traceutil/trace.go:171","msg":"trace[1855307186] range","detail":"{range_begin:/registry/apiregistration.k8s.io/apiservices/; range_end:/registry/apiregistration.k8s.io/apiservices0; response_count:0; response_revision:1593; }","duration":"182.128472ms","start":"2023-09-11T12:29:42.843189Z","end":"2023-09-11T12:29:43.025317Z","steps":["trace[1855307186] 'agreement among raft nodes before linearized reading'  (duration: 181.149847ms)"],"step_count":1}
	{"level":"warn","ts":"2023-09-11T12:29:43.291548Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"156.061105ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 serializable:true keys_only:true ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2023-09-11T12:29:43.291767Z","caller":"traceutil/trace.go:171","msg":"trace[1730981382] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:1593; }","duration":"156.307577ms","start":"2023-09-11T12:29:43.135443Z","end":"2023-09-11T12:29:43.291751Z","steps":["trace[1730981382] 'range keys from in-memory index tree'  (duration: 156.033357ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  12:31:17 up 23 min,  0 users,  load average: 0.59, 0.42, 0.31
	Linux default-k8s-diff-port-484027 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [07023f1836d74eec5c3529ad6cdb0d828bd0c7665f6bca932ec72822a5c09d45] <==
	* I0911 12:28:45.490696       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.87.53:443: connect: connection refused
	I0911 12:28:45.490782       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:28:45.645169       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:28:45.645474       1 controller.go:143] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:28:45.646340       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.87.53:443: connect: connection refused
	I0911 12:28:45.646408       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:28:46.646017       1 handler_proxy.go:93] no RequestInfo found in the context
	W0911 12:28:46.646039       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:28:46.646230       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:28:46.646352       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0911 12:28:46.646325       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:28:46.647729       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:29:45.489863       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.87.53:443: connect: connection refused
	I0911 12:29:45.490029       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:29:46.647731       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:29:46.647835       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:29:46.647848       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:29:46.648227       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:29:46.648397       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:29:46.650002       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:30:45.489758       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.106.87.53:443: connect: connection refused
	I0911 12:30:45.490108       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	* 
	* ==> kube-controller-manager [169c262446f69da9b20dfc8c4814b9b81e533160f547be5da36e5ee9279710a6] <==
	* I0911 12:25:29.006677       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:25:58.398671       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:25:59.018341       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:26:28.406684       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:26:29.028804       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:26:58.414805       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:26:59.038772       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:27:28.423309       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:27:29.055166       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:27:58.430503       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:27:59.064772       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:28:28.439896       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:28:29.079271       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:28:58.447449       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:28:59.115893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:29:28.456880       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:29:29.125170       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:29:58.463839       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:29:59.136323       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:30:20.032183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="2.696529ms"
	E0911 12:30:28.472092       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:30:29.148461       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:30:35.026295       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="786.001µs"
	E0911 12:30:58.479073       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:30:59.166897       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [08777e80449f23272f7fb879eb08a6a41fdb1db433c41ee24bdc46aa3174c124] <==
	* I0911 12:08:47.790276       1 server_others.go:69] "Using iptables proxy"
	I0911 12:08:47.805450       1 node.go:141] Successfully retrieved node IP: 192.168.39.230
	I0911 12:08:47.870766       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 12:08:47.871077       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 12:08:47.874885       1 server_others.go:152] "Using iptables Proxier"
	I0911 12:08:47.875109       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 12:08:47.875510       1 server.go:846] "Version info" version="v1.28.1"
	I0911 12:08:47.875557       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:08:47.877057       1 config.go:188] "Starting service config controller"
	I0911 12:08:47.877121       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 12:08:47.877160       1 config.go:97] "Starting endpoint slice config controller"
	I0911 12:08:47.877198       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 12:08:47.878161       1 config.go:315] "Starting node config controller"
	I0911 12:08:47.878211       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 12:08:47.978314       1 shared_informer.go:318] Caches are synced for node config
	I0911 12:08:47.978374       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 12:08:47.978514       1 shared_informer.go:318] Caches are synced for service config
	
	* 
	* ==> kube-scheduler [fc4e7b5d1258cf90a5a76a54da92293ae4d97aa60225aa310b8cbe9b3e39dff6] <==
	* I0911 12:08:42.292225       1 serving.go:348] Generated self-signed cert in-memory
	W0911 12:08:45.579514       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0911 12:08:45.579603       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 12:08:45.579618       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0911 12:08:45.579625       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0911 12:08:45.640777       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.1"
	I0911 12:08:45.640825       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:08:45.647480       1 secure_serving.go:210] Serving securely on 127.0.0.1:10259
	I0911 12:08:45.647673       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0911 12:08:45.647693       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0911 12:08:45.647713       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0911 12:08:45.748207       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:08:09 UTC, ends at Mon 2023-09-11 12:31:17 UTC. --
	Sep 11 12:28:38 default-k8s-diff-port-484027 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:28:38 default-k8s-diff-port-484027 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:28:43 default-k8s-diff-port-484027 kubelet[923]: E0911 12:28:43.000883     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:28:57 default-k8s-diff-port-484027 kubelet[923]: E0911 12:28:57.000330     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:29:12 default-k8s-diff-port-484027 kubelet[923]: E0911 12:29:12.000833     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:29:24 default-k8s-diff-port-484027 kubelet[923]: E0911 12:29:24.999495     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:29:37 default-k8s-diff-port-484027 kubelet[923]: E0911 12:29:37.000448     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:29:38 default-k8s-diff-port-484027 kubelet[923]: E0911 12:29:38.020201     923 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:29:38 default-k8s-diff-port-484027 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:29:38 default-k8s-diff-port-484027 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:29:38 default-k8s-diff-port-484027 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:29:51 default-k8s-diff-port-484027 kubelet[923]: E0911 12:29:51.000421     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:30:06 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:06.026060     923 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 11 12:30:06 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:06.026153     923 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 11 12:30:06 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:06.026418     923 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-vb979,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:
&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessag
ePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-tw6td_kube-system(37d0a828-9243-4359-be39-1c2099835e45): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:30:06 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:06.026463     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:30:20 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:20.001809     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:30:35 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:35.000690     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:30:38 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:38.015532     923 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:30:38 default-k8s-diff-port-484027 kubelet[923]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:30:38 default-k8s-diff-port-484027 kubelet[923]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:30:38 default-k8s-diff-port-484027 kubelet[923]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:30:46 default-k8s-diff-port-484027 kubelet[923]: E0911 12:30:46.001131     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:31:00 default-k8s-diff-port-484027 kubelet[923]: E0911 12:31:00.999669     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	Sep 11 12:31:15 default-k8s-diff-port-484027 kubelet[923]: E0911 12:31:15.001380     923 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-tw6td" podUID="37d0a828-9243-4359-be39-1c2099835e45"
	
	* 
	* ==> storage-provisioner [8cc82bfb8abe6573ba418652dd95923b4d0a421ed289c395044cf58da3d17fed] <==
	* I0911 12:09:18.433186       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:09:18.456298       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:09:18.459726       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:09:35.869781       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:09:35.871418       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-484027_c66fce9b-b030-4618-8623-40f52941e58b!
	I0911 12:09:35.870351       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"314be2a5-1789-42e0-a9e6-b1e42a2502da", APIVersion:"v1", ResourceVersion:"598", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-484027_c66fce9b-b030-4618-8623-40f52941e58b became leader
	I0911 12:09:35.971811       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-484027_c66fce9b-b030-4618-8623-40f52941e58b!
	
	* 
	* ==> storage-provisioner [f5464e92c81e84a3fa7c122d8da49d4ea2ec9c0600cdd5a9fc37500491a7a329] <==
	* I0911 12:08:47.516453       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0911 12:09:17.521878       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	

                                                
                                                
-- /stdout --
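Note: the repeated ImagePullBackOff entries in the kubelet log above are expected for this suite. The metrics-server addon was pointed at the unresolvable registry fake.domain (see the "addons enable metrics-server ... --registries=MetricsServer=fake.domain" entry in the Audit table later in this report), so pulls of fake.domain/registry.k8s.io/echoserver:1.4 can never succeed. A minimal way to confirm the configured image by hand, assuming the deployment is named metrics-server as the ReplicaSet name above suggests, is:

  # print the image configured on the metrics-server deployment in this profile
  kubectl --context default-k8s-diff-port-484027 -n kube-system \
    get deployment metrics-server \
    -o jsonpath='{.spec.template.spec.containers[0].image}'

For this configuration the command should print fake.domain/registry.k8s.io/echoserver:1.4.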
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-tw6td
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 describe pod metrics-server-57f55c9bc5-tw6td
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-484027 describe pod metrics-server-57f55c9bc5-tw6td: exit status 1 (86.048709ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-tw6td" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-484027 describe pod metrics-server-57f55c9bc5-tw6td: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (542.55s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (221.92s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0911 12:23:47.569858 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 12:24:15.053567 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 12:26:22.842127 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-352076 -n no-preload-352076
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2023-09-11 12:27:26.546758637 +0000 UTC m=+5451.859383520
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-352076 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-352076 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (2.221µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-352076 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
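Note: the assertion at start_stop_delete_test.go:297 compares the dashboard-metrics-scraper deployment info against registry.k8s.io/echoserver:1.4, but the describe call above hit the context deadline, so the deployment info is empty. A rough manual equivalent of that image check (a sketch only, assuming the deployment and namespace names used above) is:

  # print the container image(s) configured on the dashboard-metrics-scraper deployment
  kubectl --context no-preload-352076 -n kubernetes-dashboard \
    get deployment dashboard-metrics-scraper \
    -o jsonpath='{.spec.template.spec.containers[*].image}'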
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-352076 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-352076 logs -n 25: (1.277781145s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cert-options-559775 -- sudo                         | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	|         | cat /etc/kubernetes/admin.conf                         |                              |         |         |                     |                     |
	| delete  | -p cert-options-559775                                 | cert-options-559775          | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:58 UTC |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 11:58 UTC | 11 Sep 23 11:59 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-352076             | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 11:59 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-235462            | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-642215        | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:00 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| start   | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:00 UTC | 11 Sep 23 12:01 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-758549                              | cert-expiration-758549       | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	| delete  | -p                                                     | disable-driver-mounts-226537 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:01 UTC |
	|         | disable-driver-mounts-226537                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:01 UTC | 11 Sep 23 12:02 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-352076                  | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-484027  | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:02 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-235462                 | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-352076                                   | no-preload-352076            | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:14 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --preload=false --driver=kvm2                          |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| start   | -p embed-certs-235462                                  | embed-certs-235462           | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:13 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-642215             | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:02 UTC | 11 Sep 23 12:08 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.16.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-484027       | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-484027 | jenkins | v1.31.2 | 11 Sep 23 12:04 UTC | 11 Sep 23 12:13 UTC |
	|         | default-k8s-diff-port-484027                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-642215                              | old-k8s-version-642215       | jenkins | v1.31.2 | 11 Sep 23 12:26 UTC | 11 Sep 23 12:26 UTC |
	| start   | -p newest-cni-867563 --memory=2200 --alsologtostderr   | newest-cni-867563            | jenkins | v1.31.2 | 11 Sep 23 12:26 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.1                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
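	Note: each invocation in the table above is split across several rows. Joined back into a single shell command, the final entry (the run whose log follows) reads as below; this is a convenience restatement of the table, not additional test output.

	out/minikube-linux-amd64 start -p newest-cni-867563 --memory=2200 --alsologtostderr \
	  --wait=apiserver,system_pods,default_sa \
	  --feature-gates ServerSideApply=true \
	  --network-plugin=cni \
	  --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
	  --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.28.1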
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 12:26:42
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 12:26:42.841400 2261081 out.go:296] Setting OutFile to fd 1 ...
	I0911 12:26:42.841536 2261081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:26:42.841542 2261081 out.go:309] Setting ErrFile to fd 2...
	I0911 12:26:42.841546 2261081 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 12:26:42.841756 2261081 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 12:26:42.842424 2261081 out.go:303] Setting JSON to false
	I0911 12:26:42.843491 2261081 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":238154,"bootTime":1694197049,"procs":209,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 12:26:42.843561 2261081 start.go:138] virtualization: kvm guest
	I0911 12:26:42.846722 2261081 out.go:177] * [newest-cni-867563] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 12:26:42.848867 2261081 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 12:26:42.848884 2261081 notify.go:220] Checking for updates...
	I0911 12:26:42.850745 2261081 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 12:26:42.852549 2261081 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 12:26:42.854436 2261081 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:26:42.856328 2261081 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 12:26:42.858090 2261081 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 12:26:42.861494 2261081 config.go:182] Loaded profile config "default-k8s-diff-port-484027": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:26:42.861888 2261081 config.go:182] Loaded profile config "embed-certs-235462": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:26:42.862016 2261081 config.go:182] Loaded profile config "no-preload-352076": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:26:42.862177 2261081 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 12:26:42.903674 2261081 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 12:26:42.905427 2261081 start.go:298] selected driver: kvm2
	I0911 12:26:42.905456 2261081 start.go:902] validating driver "kvm2" against <nil>
	I0911 12:26:42.905471 2261081 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 12:26:42.906313 2261081 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:26:42.906419 2261081 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 12:26:42.923679 2261081 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 12:26:42.923821 2261081 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	W0911 12:26:42.923853 2261081 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0911 12:26:42.924124 2261081 start_flags.go:941] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0911 12:26:42.924168 2261081 cni.go:84] Creating CNI manager for ""
	I0911 12:26:42.924179 2261081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:26:42.924198 2261081 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 12:26:42.924210 2261081 start_flags.go:321] config:
	{Name:newest-cni-867563 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-867563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
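	The warning above ("With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative") points at a simpler flag. A minimal sketch of that alternative for the same profile and runtime; this run auto-selected the bridge CNI, so --cni=bridge mirrors what the log settles on:

	# Hedged alternative to --network-plugin=cni; "bridge" matches the CNI chosen above.
	out/minikube-linux-amd64 start -p newest-cni-867563 --memory=2200 \
	  --driver=kvm2 --container-runtime=crio --cni=bridge --kubernetes-version=v1.28.1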
	I0911 12:26:42.924476 2261081 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 12:26:42.927305 2261081 out.go:177] * Starting control plane node newest-cni-867563 in cluster newest-cni-867563
	I0911 12:26:42.929559 2261081 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:26:42.929657 2261081 preload.go:148] Found local preload: /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4
	I0911 12:26:42.929671 2261081 cache.go:57] Caching tarball of preloaded images
	I0911 12:26:42.929782 2261081 preload.go:174] Found /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0911 12:26:42.929804 2261081 cache.go:60] Finished verifying existence of preloaded tar for  v1.28.1 on crio
	I0911 12:26:42.929951 2261081 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/config.json ...
	I0911 12:26:42.929978 2261081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/config.json: {Name:mk9618756c39275de4de855fa36adf111a451e4e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:26:42.930193 2261081 start.go:365] acquiring machines lock for newest-cni-867563: {Name:mk4cb70223c227bb43bb0b05d1db0b50a4f38f3e Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0911 12:26:42.930235 2261081 start.go:369] acquired machines lock for "newest-cni-867563" in 20.99µs
	I0911 12:26:42.930263 2261081 start.go:93] Provisioning new machine with config: &{Name:newest-cni-867563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.1 ClusterName:newest-cni-867563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0911 12:26:42.930330 2261081 start.go:125] createHost starting for "" (driver="kvm2")
	I0911 12:26:42.932507 2261081 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0911 12:26:42.932725 2261081 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 12:26:42.932794 2261081 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 12:26:42.948249 2261081 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46713
	I0911 12:26:42.948872 2261081 main.go:141] libmachine: () Calling .GetVersion
	I0911 12:26:42.949476 2261081 main.go:141] libmachine: Using API Version  1
	I0911 12:26:42.949498 2261081 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 12:26:42.949977 2261081 main.go:141] libmachine: () Calling .GetMachineName
	I0911 12:26:42.950224 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetMachineName
	I0911 12:26:42.950451 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:26:42.950670 2261081 start.go:159] libmachine.API.Create for "newest-cni-867563" (driver="kvm2")
	I0911 12:26:42.950705 2261081 client.go:168] LocalClient.Create starting
	I0911 12:26:42.950752 2261081 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem
	I0911 12:26:42.950801 2261081 main.go:141] libmachine: Decoding PEM data...
	I0911 12:26:42.950825 2261081 main.go:141] libmachine: Parsing certificate...
	I0911 12:26:42.950915 2261081 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem
	I0911 12:26:42.950950 2261081 main.go:141] libmachine: Decoding PEM data...
	I0911 12:26:42.950968 2261081 main.go:141] libmachine: Parsing certificate...
	I0911 12:26:42.951085 2261081 main.go:141] libmachine: Running pre-create checks...
	I0911 12:26:42.951114 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .PreCreateCheck
	I0911 12:26:42.951527 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetConfigRaw
	I0911 12:26:42.952030 2261081 main.go:141] libmachine: Creating machine...
	I0911 12:26:42.952047 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .Create
	I0911 12:26:42.952202 2261081 main.go:141] libmachine: (newest-cni-867563) Creating KVM machine...
	I0911 12:26:42.953637 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found existing default KVM network
	I0911 12:26:42.955038 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:42.954841 2261104 network.go:214] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:92:62:7e} reservation:<nil>}
	I0911 12:26:42.956034 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:42.955918 2261104 network.go:214] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr4 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:c8:2a:fc} reservation:<nil>}
	I0911 12:26:42.957326 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:42.957248 2261104 network.go:209] using free private subnet 192.168.61.0/24: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00032c950}
	I0911 12:26:42.964208 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | trying to create private KVM network mk-newest-cni-867563 192.168.61.0/24...
	I0911 12:26:43.067267 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | private KVM network mk-newest-cni-867563 192.168.61.0/24 created
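	To confirm the private network the kvm2 driver just created, the standard libvirt tooling on the host can show it; a quick check, assuming virsh is installed:

	# mk-newest-cni-867563 should be listed alongside the "default" network.
	virsh net-list --all
	# Dump its XML to confirm the 192.168.61.0/24 subnet selected above.
	virsh net-dumpxml mk-newest-cni-867563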
	I0911 12:26:43.067317 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:43.067195 2261104 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:26:43.067332 2261081 main.go:141] libmachine: (newest-cni-867563) Setting up store path in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563 ...
	I0911 12:26:43.067351 2261081 main.go:141] libmachine: (newest-cni-867563) Building disk image from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 12:26:43.067488 2261081 main.go:141] libmachine: (newest-cni-867563) Downloading /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso...
	I0911 12:26:43.331391 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:43.331232 2261104 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa...
	I0911 12:26:43.642468 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:43.642266 2261104 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/newest-cni-867563.rawdisk...
	I0911 12:26:43.642514 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Writing magic tar header
	I0911 12:26:43.642530 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Writing SSH key tar header
	I0911 12:26:43.642542 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:43.642448 2261104 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563 ...
	I0911 12:26:43.642648 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563
	I0911 12:26:43.642709 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines
	I0911 12:26:43.642724 2261081 main.go:141] libmachine: (newest-cni-867563) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563 (perms=drwx------)
	I0911 12:26:43.642759 2261081 main.go:141] libmachine: (newest-cni-867563) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube/machines (perms=drwxr-xr-x)
	I0911 12:26:43.642781 2261081 main.go:141] libmachine: (newest-cni-867563) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273/.minikube (perms=drwxr-xr-x)
	I0911 12:26:43.642795 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 12:26:43.642813 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/17223-2215273
	I0911 12:26:43.642833 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0911 12:26:43.642850 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Checking permissions on dir: /home/jenkins
	I0911 12:26:43.642865 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Checking permissions on dir: /home
	I0911 12:26:43.642880 2261081 main.go:141] libmachine: (newest-cni-867563) Setting executable bit set on /home/jenkins/minikube-integration/17223-2215273 (perms=drwxrwxr-x)
	I0911 12:26:43.642894 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Skipping /home - not owner
	I0911 12:26:43.642915 2261081 main.go:141] libmachine: (newest-cni-867563) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0911 12:26:43.642930 2261081 main.go:141] libmachine: (newest-cni-867563) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0911 12:26:43.642955 2261081 main.go:141] libmachine: (newest-cni-867563) Creating domain...
	I0911 12:26:43.644267 2261081 main.go:141] libmachine: (newest-cni-867563) define libvirt domain using xml: 
	I0911 12:26:43.644294 2261081 main.go:141] libmachine: (newest-cni-867563) <domain type='kvm'>
	I0911 12:26:43.644302 2261081 main.go:141] libmachine: (newest-cni-867563)   <name>newest-cni-867563</name>
	I0911 12:26:43.644321 2261081 main.go:141] libmachine: (newest-cni-867563)   <memory unit='MiB'>2200</memory>
	I0911 12:26:43.644331 2261081 main.go:141] libmachine: (newest-cni-867563)   <vcpu>2</vcpu>
	I0911 12:26:43.644337 2261081 main.go:141] libmachine: (newest-cni-867563)   <features>
	I0911 12:26:43.644346 2261081 main.go:141] libmachine: (newest-cni-867563)     <acpi/>
	I0911 12:26:43.644353 2261081 main.go:141] libmachine: (newest-cni-867563)     <apic/>
	I0911 12:26:43.644359 2261081 main.go:141] libmachine: (newest-cni-867563)     <pae/>
	I0911 12:26:43.644367 2261081 main.go:141] libmachine: (newest-cni-867563)     
	I0911 12:26:43.644373 2261081 main.go:141] libmachine: (newest-cni-867563)   </features>
	I0911 12:26:43.644380 2261081 main.go:141] libmachine: (newest-cni-867563)   <cpu mode='host-passthrough'>
	I0911 12:26:43.644423 2261081 main.go:141] libmachine: (newest-cni-867563)   
	I0911 12:26:43.644455 2261081 main.go:141] libmachine: (newest-cni-867563)   </cpu>
	I0911 12:26:43.644471 2261081 main.go:141] libmachine: (newest-cni-867563)   <os>
	I0911 12:26:43.644488 2261081 main.go:141] libmachine: (newest-cni-867563)     <type>hvm</type>
	I0911 12:26:43.644503 2261081 main.go:141] libmachine: (newest-cni-867563)     <boot dev='cdrom'/>
	I0911 12:26:43.644520 2261081 main.go:141] libmachine: (newest-cni-867563)     <boot dev='hd'/>
	I0911 12:26:43.644534 2261081 main.go:141] libmachine: (newest-cni-867563)     <bootmenu enable='no'/>
	I0911 12:26:43.644546 2261081 main.go:141] libmachine: (newest-cni-867563)   </os>
	I0911 12:26:43.644555 2261081 main.go:141] libmachine: (newest-cni-867563)   <devices>
	I0911 12:26:43.644564 2261081 main.go:141] libmachine: (newest-cni-867563)     <disk type='file' device='cdrom'>
	I0911 12:26:43.644579 2261081 main.go:141] libmachine: (newest-cni-867563)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/boot2docker.iso'/>
	I0911 12:26:43.644593 2261081 main.go:141] libmachine: (newest-cni-867563)       <target dev='hdc' bus='scsi'/>
	I0911 12:26:43.644626 2261081 main.go:141] libmachine: (newest-cni-867563)       <readonly/>
	I0911 12:26:43.644642 2261081 main.go:141] libmachine: (newest-cni-867563)     </disk>
	I0911 12:26:43.644651 2261081 main.go:141] libmachine: (newest-cni-867563)     <disk type='file' device='disk'>
	I0911 12:26:43.644672 2261081 main.go:141] libmachine: (newest-cni-867563)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0911 12:26:43.644685 2261081 main.go:141] libmachine: (newest-cni-867563)       <source file='/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/newest-cni-867563.rawdisk'/>
	I0911 12:26:43.644694 2261081 main.go:141] libmachine: (newest-cni-867563)       <target dev='hda' bus='virtio'/>
	I0911 12:26:43.644700 2261081 main.go:141] libmachine: (newest-cni-867563)     </disk>
	I0911 12:26:43.644708 2261081 main.go:141] libmachine: (newest-cni-867563)     <interface type='network'>
	I0911 12:26:43.644717 2261081 main.go:141] libmachine: (newest-cni-867563)       <source network='mk-newest-cni-867563'/>
	I0911 12:26:43.644728 2261081 main.go:141] libmachine: (newest-cni-867563)       <model type='virtio'/>
	I0911 12:26:43.644756 2261081 main.go:141] libmachine: (newest-cni-867563)     </interface>
	I0911 12:26:43.644777 2261081 main.go:141] libmachine: (newest-cni-867563)     <interface type='network'>
	I0911 12:26:43.644792 2261081 main.go:141] libmachine: (newest-cni-867563)       <source network='default'/>
	I0911 12:26:43.644805 2261081 main.go:141] libmachine: (newest-cni-867563)       <model type='virtio'/>
	I0911 12:26:43.644843 2261081 main.go:141] libmachine: (newest-cni-867563)     </interface>
	I0911 12:26:43.644860 2261081 main.go:141] libmachine: (newest-cni-867563)     <serial type='pty'>
	I0911 12:26:43.644872 2261081 main.go:141] libmachine: (newest-cni-867563)       <target port='0'/>
	I0911 12:26:43.644884 2261081 main.go:141] libmachine: (newest-cni-867563)     </serial>
	I0911 12:26:43.644897 2261081 main.go:141] libmachine: (newest-cni-867563)     <console type='pty'>
	I0911 12:26:43.644918 2261081 main.go:141] libmachine: (newest-cni-867563)       <target type='serial' port='0'/>
	I0911 12:26:43.644938 2261081 main.go:141] libmachine: (newest-cni-867563)     </console>
	I0911 12:26:43.644956 2261081 main.go:141] libmachine: (newest-cni-867563)     <rng model='virtio'>
	I0911 12:26:43.644972 2261081 main.go:141] libmachine: (newest-cni-867563)       <backend model='random'>/dev/random</backend>
	I0911 12:26:43.644984 2261081 main.go:141] libmachine: (newest-cni-867563)     </rng>
	I0911 12:26:43.644997 2261081 main.go:141] libmachine: (newest-cni-867563)     
	I0911 12:26:43.645011 2261081 main.go:141] libmachine: (newest-cni-867563)     
	I0911 12:26:43.645025 2261081 main.go:141] libmachine: (newest-cni-867563)   </devices>
	I0911 12:26:43.645041 2261081 main.go:141] libmachine: (newest-cni-867563) </domain>
	I0911 12:26:43.645057 2261081 main.go:141] libmachine: (newest-cni-867563) 
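	minikube defines this domain through the libvirt API rather than the CLI, but the XML printed above could be exercised by hand with virsh; a sketch assuming it were saved to a file named newest-cni-867563.xml (the filename is hypothetical):

	virsh define newest-cni-867563.xml    # register the domain from the XML above
	virsh start newest-cni-867563         # boot the VM
	virsh dumpxml newest-cni-867563       # inspect the definition libvirt stored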
	I0911 12:26:43.649983 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:1e:6e:bf in network default
	I0911 12:26:43.650714 2261081 main.go:141] libmachine: (newest-cni-867563) Ensuring networks are active...
	I0911 12:26:43.650745 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:43.651487 2261081 main.go:141] libmachine: (newest-cni-867563) Ensuring network default is active
	I0911 12:26:43.651800 2261081 main.go:141] libmachine: (newest-cni-867563) Ensuring network mk-newest-cni-867563 is active
	I0911 12:26:43.652387 2261081 main.go:141] libmachine: (newest-cni-867563) Getting domain xml...
	I0911 12:26:43.653219 2261081 main.go:141] libmachine: (newest-cni-867563) Creating domain...
	I0911 12:26:45.003843 2261081 main.go:141] libmachine: (newest-cni-867563) Waiting to get IP...
	I0911 12:26:45.004908 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:45.005488 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:45.005527 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:45.005482 2261104 retry.go:31] will retry after 217.341845ms: waiting for machine to come up
	I0911 12:26:45.225145 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:45.225796 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:45.225822 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:45.225748 2261104 retry.go:31] will retry after 294.263978ms: waiting for machine to come up
	I0911 12:26:45.521272 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:45.521673 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:45.521727 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:45.521633 2261104 retry.go:31] will retry after 487.319953ms: waiting for machine to come up
	I0911 12:26:46.011289 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:46.011972 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:46.012004 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:46.011918 2261104 retry.go:31] will retry after 421.269542ms: waiting for machine to come up
	I0911 12:26:46.434415 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:46.434875 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:46.434904 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:46.434835 2261104 retry.go:31] will retry after 739.617394ms: waiting for machine to come up
	I0911 12:26:47.176189 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:47.176673 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:47.176702 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:47.176591 2261104 retry.go:31] will retry after 765.701386ms: waiting for machine to come up
	I0911 12:26:47.943596 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:47.944088 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:47.944118 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:47.944053 2261104 retry.go:31] will retry after 730.547114ms: waiting for machine to come up
	I0911 12:26:48.676382 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:48.676835 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:48.676872 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:48.676779 2261104 retry.go:31] will retry after 1.012431834s: waiting for machine to come up
	I0911 12:26:49.691499 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:49.692047 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:49.692074 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:49.691993 2261104 retry.go:31] will retry after 1.844917524s: waiting for machine to come up
	I0911 12:26:51.539406 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:51.539894 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:51.539928 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:51.539859 2261104 retry.go:31] will retry after 2.251430685s: waiting for machine to come up
	I0911 12:26:53.792760 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:53.793379 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:53.793441 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:53.793280 2261104 retry.go:31] will retry after 1.945321358s: waiting for machine to come up
	I0911 12:26:55.740020 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:55.740729 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:55.740768 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:55.740615 2261104 retry.go:31] will retry after 3.29242501s: waiting for machine to come up
	I0911 12:26:59.035255 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:26:59.035921 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:26:59.035951 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:26:59.035861 2261104 retry.go:31] will retry after 3.425419409s: waiting for machine to come up
	I0911 12:27:02.465523 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:02.465960 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find current IP address of domain newest-cni-867563 in network mk-newest-cni-867563
	I0911 12:27:02.465997 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | I0911 12:27:02.465895 2261104 retry.go:31] will retry after 5.275255982s: waiting for machine to come up
	I0911 12:27:07.746051 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:07.746442 2261081 main.go:141] libmachine: (newest-cni-867563) Found IP for machine: 192.168.61.4
	I0911 12:27:07.746494 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has current primary IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
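	The retry loop above polls libvirt for a DHCP lease matching the domain's MAC address; the same lease can be inspected directly from the host while a run is in progress, assuming virsh is available:

	# The entry for 52:54:00:ca:c6:3b should show 192.168.61.4 once the guest requests an address.
	virsh net-dhcp-leases mk-newest-cni-867563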
	I0911 12:27:07.746509 2261081 main.go:141] libmachine: (newest-cni-867563) Reserving static IP address...
	I0911 12:27:07.746913 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | unable to find host DHCP lease matching {name: "newest-cni-867563", mac: "52:54:00:ca:c6:3b", ip: "192.168.61.4"} in network mk-newest-cni-867563
	I0911 12:27:07.844910 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Getting to WaitForSSH function...
	I0911 12:27:07.844940 2261081 main.go:141] libmachine: (newest-cni-867563) Reserved static IP address: 192.168.61.4
	I0911 12:27:07.844955 2261081 main.go:141] libmachine: (newest-cni-867563) Waiting for SSH to be available...
	I0911 12:27:07.848705 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:07.849223 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:minikube Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:07.849255 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:07.849391 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Using SSH client type: external
	I0911 12:27:07.849415 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Using SSH private key: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa (-rw-------)
	I0911 12:27:07.849448 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.4 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0911 12:27:07.849485 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | About to run SSH command:
	I0911 12:27:07.849502 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | exit 0
	I0911 12:27:07.945261 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | SSH cmd err, output: <nil>: 
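	The WaitForSSH probe above is plain OpenSSH with the logged options. Reassembled for manual use (options moved ahead of the destination for readability; key path and address exactly as logged):

	ssh -F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no \
	  -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no \
	  -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null \
	  -o IdentitiesOnly=yes \
	  -i /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa \
	  -p 22 docker@192.168.61.4 'exit 0'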
	I0911 12:27:07.945554 2261081 main.go:141] libmachine: (newest-cni-867563) KVM machine creation complete!
	I0911 12:27:07.945998 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetConfigRaw
	I0911 12:27:07.946634 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:07.946871 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:07.947080 2261081 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0911 12:27:07.947100 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetState
	I0911 12:27:07.948741 2261081 main.go:141] libmachine: Detecting operating system of created instance...
	I0911 12:27:07.948761 2261081 main.go:141] libmachine: Waiting for SSH to be available...
	I0911 12:27:07.948774 2261081 main.go:141] libmachine: Getting to WaitForSSH function...
	I0911 12:27:07.948784 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:07.951632 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:07.952108 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:07.952149 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:07.952284 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:07.952474 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:07.952645 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:07.952866 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:07.953051 2261081 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:07.953537 2261081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.4 22 <nil> <nil>}
	I0911 12:27:07.953560 2261081 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0911 12:27:08.076668 2261081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:27:08.076705 2261081 main.go:141] libmachine: Detecting the provisioner...
	I0911 12:27:08.076718 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:08.079974 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.080349 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:08.080387 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.080583 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:08.080889 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.081082 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.081291 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:08.081512 2261081 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:08.081971 2261081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.4 22 <nil> <nil>}
	I0911 12:27:08.081985 2261081 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0911 12:27:08.210302 2261081 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2021.02.12-1-g88b5c50-dirty
	ID=buildroot
	VERSION_ID=2021.02.12
	PRETTY_NAME="Buildroot 2021.02.12"
	
	I0911 12:27:08.210447 2261081 main.go:141] libmachine: found compatible host: buildroot
	I0911 12:27:08.210471 2261081 main.go:141] libmachine: Provisioning with buildroot...
	I0911 12:27:08.210488 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetMachineName
	I0911 12:27:08.210802 2261081 buildroot.go:166] provisioning hostname "newest-cni-867563"
	I0911 12:27:08.210845 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetMachineName
	I0911 12:27:08.211085 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:08.214113 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.214536 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:08.214567 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.214744 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:08.214969 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.215155 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.215318 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:08.215555 2261081 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:08.215970 2261081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.4 22 <nil> <nil>}
	I0911 12:27:08.215990 2261081 main.go:141] libmachine: About to run SSH command:
	sudo hostname newest-cni-867563 && echo "newest-cni-867563" | sudo tee /etc/hostname
	I0911 12:27:08.352413 2261081 main.go:141] libmachine: SSH cmd err, output: <nil>: newest-cni-867563
	
	I0911 12:27:08.352455 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:08.355771 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.356187 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:08.356227 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.356522 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:08.356767 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.356971 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.357118 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:08.357289 2261081 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:08.357726 2261081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.4 22 <nil> <nil>}
	I0911 12:27:08.357752 2261081 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\snewest-cni-867563' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 newest-cni-867563/g' /etc/hosts;
				else 
					echo '127.0.1.1 newest-cni-867563' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0911 12:27:08.491274 2261081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0911 12:27:08.491350 2261081 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/17223-2215273/.minikube CaCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/17223-2215273/.minikube}
	I0911 12:27:08.491404 2261081 buildroot.go:174] setting up certificates
	I0911 12:27:08.491427 2261081 provision.go:83] configureAuth start
	I0911 12:27:08.491455 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetMachineName
	I0911 12:27:08.491810 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetIP
	I0911 12:27:08.494884 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.495298 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:08.495329 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.495531 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:08.498060 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.498448 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:08.498479 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.498699 2261081 provision.go:138] copyHostCerts
	I0911 12:27:08.498764 2261081 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem, removing ...
	I0911 12:27:08.498774 2261081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem
	I0911 12:27:08.498849 2261081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.pem (1082 bytes)
	I0911 12:27:08.498961 2261081 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem, removing ...
	I0911 12:27:08.498970 2261081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem
	I0911 12:27:08.498997 2261081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/cert.pem (1123 bytes)
	I0911 12:27:08.499064 2261081 exec_runner.go:144] found /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem, removing ...
	I0911 12:27:08.499072 2261081 exec_runner.go:203] rm: /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem
	I0911 12:27:08.499091 2261081 exec_runner.go:151] cp: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/17223-2215273/.minikube/key.pem (1679 bytes)
	I0911 12:27:08.499161 2261081 provision.go:112] generating server cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem org=jenkins.newest-cni-867563 san=[192.168.61.4 192.168.61.4 localhost 127.0.0.1 minikube newest-cni-867563]
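	configureAuth generates a server certificate with the SANs listed above; if a run needs debugging, the SANs can be double-checked on the host with openssl (assumed to be installed):

	# Print the Subject Alternative Name extension of the generated server certificate.
	openssl x509 -noout -text \
	  -in /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem \
	  | grep -A1 'Subject Alternative Name'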
	I0911 12:27:08.610067 2261081 provision.go:172] copyRemoteCerts
	I0911 12:27:08.610140 2261081 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0911 12:27:08.610169 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:08.612889 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.613333 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:08.613371 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.613615 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:08.613849 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.614031 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:08.614187 2261081 sshutil.go:53] new ssh client: &{IP:192.168.61.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa Username:docker}
	I0911 12:27:08.706760 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0911 12:27:08.733531 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0911 12:27:08.759837 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0911 12:27:08.784385 2261081 provision.go:86] duration metric: configureAuth took 292.929327ms
	I0911 12:27:08.784422 2261081 buildroot.go:189] setting minikube options for container-runtime
	I0911 12:27:08.784693 2261081 config.go:182] Loaded profile config "newest-cni-867563": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 12:27:08.784805 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:08.787640 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.788079 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:08.788117 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:08.788269 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:08.788494 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.788746 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:08.788980 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:08.789197 2261081 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:08.789672 2261081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.4 22 <nil> <nil>}
	I0911 12:27:08.789691 2261081 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0911 12:27:09.140657 2261081 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0911 12:27:09.140691 2261081 main.go:141] libmachine: Checking connection to Docker...
	I0911 12:27:09.140711 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetURL
	I0911 12:27:09.142256 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | Using libvirt version 6000000
	I0911 12:27:09.144683 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.145124 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:09.145168 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.145396 2261081 main.go:141] libmachine: Docker is up and running!
	I0911 12:27:09.145412 2261081 main.go:141] libmachine: Reticulating splines...
	I0911 12:27:09.145419 2261081 client.go:171] LocalClient.Create took 26.194703627s
	I0911 12:27:09.145442 2261081 start.go:167] duration metric: libmachine.API.Create for "newest-cni-867563" took 26.194773059s
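	The provisioning step a few lines above wrote CRIO_MINIKUBE_OPTIONS to /etc/sysconfig/crio.minikube and restarted crio. A sketch of how that could be verified from inside the guest, for example over the ssh invocation shown earlier or with minikube ssh -p newest-cni-867563:

	# Inside the VM: confirm the drop-in minikube wrote and that crio restarted cleanly.
	cat /etc/sysconfig/crio.minikube
	sudo systemctl status crio --no-pager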
	I0911 12:27:09.145457 2261081 start.go:300] post-start starting for "newest-cni-867563" (driver="kvm2")
	I0911 12:27:09.145478 2261081 start.go:329] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0911 12:27:09.145505 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:09.145796 2261081 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0911 12:27:09.145830 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:09.148218 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.148619 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:09.148652 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.148860 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:09.149063 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:09.149246 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:09.149421 2261081 sshutil.go:53] new ssh client: &{IP:192.168.61.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa Username:docker}
	I0911 12:27:09.243614 2261081 ssh_runner.go:195] Run: cat /etc/os-release
	I0911 12:27:09.248394 2261081 info.go:137] Remote host: Buildroot 2021.02.12
	I0911 12:27:09.248425 2261081 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/addons for local assets ...
	I0911 12:27:09.248503 2261081 filesync.go:126] Scanning /home/jenkins/minikube-integration/17223-2215273/.minikube/files for local assets ...
	I0911 12:27:09.248627 2261081 filesync.go:149] local asset: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem -> 22224712.pem in /etc/ssl/certs
	I0911 12:27:09.248754 2261081 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0911 12:27:09.259038 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:27:09.286033 2261081 start.go:303] post-start completed in 140.556313ms
	I0911 12:27:09.286100 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetConfigRaw
	I0911 12:27:09.286813 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetIP
	I0911 12:27:09.289629 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.289970 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:09.290002 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.290421 2261081 profile.go:148] Saving config to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/config.json ...
	I0911 12:27:09.290634 2261081 start.go:128] duration metric: createHost completed in 26.360290386s
	I0911 12:27:09.290689 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:09.293373 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.293736 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:09.293780 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.293919 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:09.294158 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:09.294333 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:09.294533 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:09.294741 2261081 main.go:141] libmachine: Using SSH client type: native
	I0911 12:27:09.295148 2261081 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x80ff00] 0x812fa0 <nil>  [] 0s} 192.168.61.4 22 <nil> <nil>}
	I0911 12:27:09.295161 2261081 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0911 12:27:09.419051 2261081 main.go:141] libmachine: SSH cmd err, output: <nil>: 1694435229.398993965
	
	I0911 12:27:09.419078 2261081 fix.go:206] guest clock: 1694435229.398993965
	I0911 12:27:09.419090 2261081 fix.go:219] Guest: 2023-09-11 12:27:09.398993965 +0000 UTC Remote: 2023-09-11 12:27:09.290653637 +0000 UTC m=+26.487075829 (delta=108.340328ms)
	I0911 12:27:09.419119 2261081 fix.go:190] guest clock delta is within tolerance: 108.340328ms
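The two fix.go lines above compare the guest VM clock against the host clock and accept the drift because it is small. The following is a minimal Go sketch of that check, not minikube's actual fix.go code; the timestamps are copied from the log and the 2-second tolerance is an assumed value.

// Illustrative sketch (not minikube's fix.go): decide whether the guest/host
// clock drift reported in the log above is within an allowed tolerance.
package main

import (
	"fmt"
	"time"
)

// withinTolerance returns the absolute guest/host difference and whether it
// is no larger than the allowed drift.
func withinTolerance(guest, host time.Time, max time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= max
}

func main() {
	// Values taken from the log lines above.
	guest := time.Unix(1694435229, 398993965)                      // guest clock: 1694435229.398993965
	host := time.Date(2023, 9, 11, 12, 27, 9, 290653637, time.UTC) // remote (host) timestamp
	delta, ok := withinTolerance(guest, host, 2*time.Second)       // 2s tolerance is an assumption
	fmt.Printf("delta=%v within=%v\n", delta, ok)                  // prints ~108.340328ms, true
}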
	I0911 12:27:09.419142 2261081 start.go:83] releasing machines lock for "newest-cni-867563", held for 26.48888193s
	I0911 12:27:09.419176 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:09.419552 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetIP
	I0911 12:27:09.422545 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.422949 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:09.422996 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.423185 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:09.423756 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:09.423962 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .DriverName
	I0911 12:27:09.424077 2261081 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0911 12:27:09.424134 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:09.424249 2261081 ssh_runner.go:195] Run: cat /version.json
	I0911 12:27:09.424279 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHHostname
	I0911 12:27:09.427397 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.427627 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.427904 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:09.427969 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.428034 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:09.428057 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:09.428101 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:09.428293 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHPort
	I0911 12:27:09.428303 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:09.428508 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:09.428514 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHKeyPath
	I0911 12:27:09.428750 2261081 sshutil.go:53] new ssh client: &{IP:192.168.61.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa Username:docker}
	I0911 12:27:09.428852 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetSSHUsername
	I0911 12:27:09.429018 2261081 sshutil.go:53] new ssh client: &{IP:192.168.61.4 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/newest-cni-867563/id_rsa Username:docker}
	I0911 12:27:09.515664 2261081 ssh_runner.go:195] Run: systemctl --version
	I0911 12:27:09.550916 2261081 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0911 12:27:09.722218 2261081 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0911 12:27:09.729270 2261081 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0911 12:27:09.729368 2261081 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0911 12:27:09.748052 2261081 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
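The find/mv one-liner above renames any conflicting bridge or podman CNI configs so only minikube's own CNI configuration stays active. Below is a small Go sketch of the same renaming step, under the assumption that a plain filepath.Glob over /etc/cni/net.d is an acceptable stand-in for the `find ... -exec mv` command shown in the log.

// Illustrative sketch: rename bridge/podman CNI config files in a directory
// with a ".mk_disabled" suffix, skipping files that are already disabled.
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func disableConflictingCNI(dir string) ([]string, error) {
	var disabled []string
	for _, pattern := range []string{"*bridge*", "*podman*"} {
		matches, err := filepath.Glob(filepath.Join(dir, pattern))
		if err != nil {
			return nil, err
		}
		for _, m := range matches {
			if strings.HasSuffix(m, ".mk_disabled") {
				continue // already disabled
			}
			if err := os.Rename(m, m+".mk_disabled"); err != nil {
				return nil, err
			}
			disabled = append(disabled, m)
		}
	}
	return disabled, nil
}

func main() {
	files, err := disableConflictingCNI("/etc/cni/net.d")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("disabled:", files)
}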
	I0911 12:27:09.748084 2261081 start.go:466] detecting cgroup driver to use...
	I0911 12:27:09.748172 2261081 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0911 12:27:09.763233 2261081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0911 12:27:09.777102 2261081 docker.go:196] disabling cri-docker service (if available) ...
	I0911 12:27:09.777180 2261081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0911 12:27:09.791995 2261081 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0911 12:27:09.807450 2261081 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0911 12:27:09.921588 2261081 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0911 12:27:10.044400 2261081 docker.go:212] disabling docker service ...
	I0911 12:27:10.044481 2261081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0911 12:27:10.060539 2261081 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0911 12:27:10.074761 2261081 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0911 12:27:10.192082 2261081 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0911 12:27:10.317848 2261081 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0911 12:27:10.332943 2261081 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0911 12:27:10.354322 2261081 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0911 12:27:10.354394 2261081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:27:10.366346 2261081 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0911 12:27:10.366443 2261081 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:27:10.379559 2261081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:27:10.392569 2261081 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0911 12:27:10.405414 2261081 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0911 12:27:10.418726 2261081 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0911 12:27:10.430232 2261081 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0911 12:27:10.430297 2261081 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0911 12:27:10.445239 2261081 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0911 12:27:10.455845 2261081 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0911 12:27:10.572058 2261081 ssh_runner.go:195] Run: sudo systemctl restart crio
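The sed commands above point CRI-O at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager before crio is restarted. As a rough sketch, the drop-in settings they end up producing could be generated like this; the [crio.image]/[crio.runtime] section names and the idea of writing a fresh drop-in (rather than sed-editing /etc/crio/crio.conf.d/02-crio.conf in place, as the log does) are assumptions.

// Illustrative sketch: render the CRI-O drop-in settings that the sed
// commands in the log above configure (pause image, cgroup manager,
// conmon cgroup).
package main

import "fmt"

func crioDropIn(pauseImage, cgroupManager string) string {
	return fmt.Sprintf(`[crio.image]
pause_image = %q

[crio.runtime]
cgroup_manager = %q
conmon_cgroup = "pod"
`, pauseImage, cgroupManager)
}

func main() {
	fmt.Print(crioDropIn("registry.k8s.io/pause:3.9", "cgroupfs"))
}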
	I0911 12:27:10.780236 2261081 start.go:513] Will wait 60s for socket path /var/run/crio/crio.sock
	I0911 12:27:10.780319 2261081 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0911 12:27:10.786047 2261081 start.go:534] Will wait 60s for crictl version
	I0911 12:27:10.786129 2261081 ssh_runner.go:195] Run: which crictl
	I0911 12:27:10.790550 2261081 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0911 12:27:10.834346 2261081 start.go:550] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.1
	RuntimeApiVersion:  v1alpha2
	I0911 12:27:10.834461 2261081 ssh_runner.go:195] Run: crio --version
	I0911 12:27:10.882078 2261081 ssh_runner.go:195] Run: crio --version
	I0911 12:27:10.944035 2261081 out.go:177] * Preparing Kubernetes v1.28.1 on CRI-O 1.24.1 ...
	I0911 12:27:10.945785 2261081 main.go:141] libmachine: (newest-cni-867563) Calling .GetIP
	I0911 12:27:10.949130 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:10.949484 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:ca:c6:3b", ip: ""} in network mk-newest-cni-867563: {Iface:virbr3 ExpiryTime:2023-09-11 13:27:00 +0000 UTC Type:0 Mac:52:54:00:ca:c6:3b Iaid: IPaddr:192.168.61.4 Prefix:24 Hostname:newest-cni-867563 Clientid:01:52:54:00:ca:c6:3b}
	I0911 12:27:10.949522 2261081 main.go:141] libmachine: (newest-cni-867563) DBG | domain newest-cni-867563 has defined IP address 192.168.61.4 and MAC address 52:54:00:ca:c6:3b in network mk-newest-cni-867563
	I0911 12:27:10.949797 2261081 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0911 12:27:10.954668 2261081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
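The bash one-liner above rewrites /etc/hosts idempotently: any existing host.minikube.internal line is dropped and a fresh "IP<TAB>name" entry is appended. A minimal Go sketch of the same text transformation follows; the sudo/temp-file copy that the log performs over SSH is omitted.

// Illustrative sketch: remove any stale entry for a host name from an
// /etc/hosts-style file and append the desired "IP<TAB>name" line.
package main

import (
	"fmt"
	"strings"
)

func updateHosts(contents, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(contents, "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop stale entry for this name
		}
		if line != "" {
			kept = append(kept, line)
		}
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	before := "127.0.0.1\tlocalhost\n192.168.61.1\thost.minikube.internal\n"
	fmt.Print(updateHosts(before, "192.168.61.1", "host.minikube.internal"))
}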
	I0911 12:27:10.969465 2261081 localpath.go:92] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/client.crt -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/client.crt
	I0911 12:27:10.969646 2261081 localpath.go:117] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/client.key -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/client.key
	I0911 12:27:10.972052 2261081 out.go:177]   - kubeadm.pod-network-cidr=10.42.0.0/16
	I0911 12:27:10.973985 2261081 preload.go:132] Checking if preload exists for k8s version v1.28.1 and runtime crio
	I0911 12:27:10.974108 2261081 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:27:11.004657 2261081 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.1". assuming images are not preloaded.
	I0911 12:27:11.004745 2261081 ssh_runner.go:195] Run: which lz4
	I0911 12:27:11.009228 2261081 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0911 12:27:11.014425 2261081 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0911 12:27:11.014466 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.1-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (457053495 bytes)
	I0911 12:27:13.064847 2261081 crio.go:444] Took 2.055617 seconds to copy over tarball
	I0911 12:27:13.064977 2261081 ssh_runner.go:195] Run: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4
	I0911 12:27:16.269121 2261081 ssh_runner.go:235] Completed: sudo tar -I lz4 -C /var -xf /preloaded.tar.lz4: (3.204095818s)
	I0911 12:27:16.269161 2261081 crio.go:451] Took 3.204278 seconds to extract the tarball
	I0911 12:27:16.269172 2261081 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0911 12:27:16.314960 2261081 ssh_runner.go:195] Run: sudo crictl images --output json
	I0911 12:27:16.392625 2261081 crio.go:496] all images are preloaded for cri-o runtime.
	I0911 12:27:16.392651 2261081 cache_images.go:84] Images are preloaded, skipping loading
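The preload decision above is driven by `sudo crictl images --output json`: before the tarball is copied over, the expected kube-apiserver image is missing, and after extraction the same check reports all images present. A small sketch of that check, assuming the standard CRI `images` JSON shape and leaving out the SSH transport minikube uses:

// Illustrative sketch: parse `crictl images --output json` output and report
// whether a given image tag (e.g. registry.k8s.io/kube-apiserver:v1.28.1) is
// already present on the node.
package main

import (
	"encoding/json"
	"fmt"
)

type crictlImages struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func hasImage(rawJSON []byte, want string) (bool, error) {
	var out crictlImages
	if err := json.Unmarshal(rawJSON, &out); err != nil {
		return false, err
	}
	for _, img := range out.Images {
		for _, tag := range img.RepoTags {
			if tag == want {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	sample := []byte(`{"images":[{"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"]}]}`)
	ok, _ := hasImage(sample, "registry.k8s.io/kube-apiserver:v1.28.1")
	fmt.Println("preloaded:", ok)
}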
	I0911 12:27:16.392727 2261081 ssh_runner.go:195] Run: crio config
	I0911 12:27:16.466221 2261081 cni.go:84] Creating CNI manager for ""
	I0911 12:27:16.466249 2261081 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 12:27:16.466274 2261081 kubeadm.go:87] Using pod CIDR: 10.42.0.0/16
	I0911 12:27:16.466299 2261081 kubeadm.go:176] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.42.0.0/16 AdvertiseAddress:192.168.61.4 APIServerPort:8443 KubernetesVersion:v1.28.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:newest-cni-867563 NodeName:newest-cni-867563 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota feature-gates:ServerSideApply=true] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.4"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[feature-gates:ServerSideApply=true leader-elect:false] Pairs:map[]}] FeatureArgs:map[]
NodeIP:192.168.61.4 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0911 12:27:16.466490 2261081 kubeadm.go:181] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.4
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "newest-cni-867563"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.4
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.4"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	    feature-gates: "ServerSideApply=true"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    feature-gates: "ServerSideApply=true"
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.42.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.42.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0911 12:27:16.466581 2261081 kubeadm.go:976] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime-endpoint=unix:///var/run/crio/crio.sock --feature-gates=ServerSideApply=true --hostname-override=newest-cni-867563 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.4
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.1 ClusterName:newest-cni-867563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
	I0911 12:27:16.466662 2261081 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.1
	I0911 12:27:16.478876 2261081 binaries.go:44] Found k8s binaries, skipping transfer
	I0911 12:27:16.479025 2261081 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0911 12:27:16.489893 2261081 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (412 bytes)
	I0911 12:27:16.510330 2261081 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0911 12:27:16.529289 2261081 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2222 bytes)
	I0911 12:27:16.550897 2261081 ssh_runner.go:195] Run: grep 192.168.61.4	control-plane.minikube.internal$ /etc/hosts
	I0911 12:27:16.556012 2261081 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.4	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0911 12:27:16.569784 2261081 certs.go:56] Setting up /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563 for IP: 192.168.61.4
	I0911 12:27:16.569831 2261081 certs.go:190] acquiring lock for shared ca certs: {Name:mk6a9ef95054da64bb5ea3255f0da59dbe502b89 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:27:16.570039 2261081 certs.go:199] skipping minikubeCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key
	I0911 12:27:16.570084 2261081 certs.go:199] skipping proxyClientCA CA generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key
	I0911 12:27:16.570261 2261081 certs.go:315] skipping minikube-user signed cert generation: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/client.key
	I0911 12:27:16.570295 2261081 certs.go:319] generating minikube signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.key.f7630225
	I0911 12:27:16.570308 2261081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.crt.f7630225 with IP's: [192.168.61.4 10.96.0.1 127.0.0.1 10.0.0.1]
	I0911 12:27:16.781182 2261081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.crt.f7630225 ...
	I0911 12:27:16.781217 2261081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.crt.f7630225: {Name:mk06e2f72db8e669ad483d2b20eee664f3bc33ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:27:16.781424 2261081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.key.f7630225 ...
	I0911 12:27:16.781440 2261081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.key.f7630225: {Name:mkbb8adf830b0d37628d0f1612fdc07cc2657855 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:27:16.781538 2261081 certs.go:337] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.crt.f7630225 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.crt
	I0911 12:27:16.781605 2261081 certs.go:341] copying /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.key.f7630225 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.key
	I0911 12:27:16.781657 2261081 certs.go:319] generating aggregator signed cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.key
	I0911 12:27:16.781679 2261081 crypto.go:68] Generating cert /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.crt with IP's: []
	I0911 12:27:16.924918 2261081 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.crt ...
	I0911 12:27:16.924956 2261081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.crt: {Name:mkd843fd834e373aa894228e06ded471c68d8909 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:27:16.959078 2261081 crypto.go:164] Writing key to /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.key ...
	I0911 12:27:16.959119 2261081 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.key: {Name:mk5406fc18d3ced74d55eb9c218639ba5cfd330a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0911 12:27:16.959389 2261081 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem (1338 bytes)
	W0911 12:27:16.959512 2261081 certs.go:433] ignoring /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471_empty.pem, impossibly tiny 0 bytes
	I0911 12:27:16.959533 2261081 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca-key.pem (1675 bytes)
	I0911 12:27:16.959570 2261081 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/ca.pem (1082 bytes)
	I0911 12:27:16.959614 2261081 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/cert.pem (1123 bytes)
	I0911 12:27:16.959648 2261081 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/certs/key.pem (1679 bytes)
	I0911 12:27:16.959710 2261081 certs.go:437] found cert: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem (1708 bytes)
	I0911 12:27:16.960476 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
	I0911 12:27:16.988800 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0911 12:27:17.017249 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0911 12:27:17.045469 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/newest-cni-867563/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0911 12:27:17.072765 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0911 12:27:17.099683 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0911 12:27:17.127404 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0911 12:27:17.154584 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0911 12:27:17.183452 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/ssl/certs/22224712.pem --> /usr/share/ca-certificates/22224712.pem (1708 bytes)
	I0911 12:27:17.211333 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0911 12:27:17.239157 2261081 ssh_runner.go:362] scp /home/jenkins/minikube-integration/17223-2215273/.minikube/certs/2222471.pem --> /usr/share/ca-certificates/2222471.pem (1338 bytes)
	I0911 12:27:17.266472 2261081 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0911 12:27:17.288521 2261081 ssh_runner.go:195] Run: openssl version
	I0911 12:27:17.295199 2261081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/2222471.pem && ln -fs /usr/share/ca-certificates/2222471.pem /etc/ssl/certs/2222471.pem"
	I0911 12:27:17.308042 2261081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/2222471.pem
	I0911 12:27:17.313945 2261081 certs.go:480] hashing: -rw-r--r-- 1 root root 1338 Sep 11 11:05 /usr/share/ca-certificates/2222471.pem
	I0911 12:27:17.314040 2261081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/2222471.pem
	I0911 12:27:17.320789 2261081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/2222471.pem /etc/ssl/certs/51391683.0"
	I0911 12:27:17.333670 2261081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/22224712.pem && ln -fs /usr/share/ca-certificates/22224712.pem /etc/ssl/certs/22224712.pem"
	I0911 12:27:17.347773 2261081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/22224712.pem
	I0911 12:27:17.355019 2261081 certs.go:480] hashing: -rw-r--r-- 1 root root 1708 Sep 11 11:05 /usr/share/ca-certificates/22224712.pem
	I0911 12:27:17.355119 2261081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/22224712.pem
	I0911 12:27:17.362003 2261081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/22224712.pem /etc/ssl/certs/3ec20f2e.0"
	I0911 12:27:17.375450 2261081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0911 12:27:17.387807 2261081 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:27:17.393007 2261081 certs.go:480] hashing: -rw-r--r-- 1 root root 1111 Sep 11 10:57 /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:27:17.393073 2261081 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0911 12:27:17.399745 2261081 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0911 12:27:17.413369 2261081 ssh_runner.go:195] Run: ls /var/lib/minikube/certs/etcd
	I0911 12:27:17.418654 2261081 certs.go:353] certs directory doesn't exist, likely first start: ls /var/lib/minikube/certs/etcd: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/certs/etcd': No such file or directory
	I0911 12:27:17.418719 2261081 kubeadm.go:404] StartCluster: {Name:newest-cni-867563 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.28.1 ClusterName:newest-cni-867563 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.61.4 Port:8443 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/miniku
be-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 12:27:17.418813 2261081 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0911 12:27:17.418864 2261081 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0911 12:27:17.454516 2261081 cri.go:89] found id: ""
	I0911 12:27:17.454609 2261081 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0911 12:27:17.466976 2261081 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0911 12:27:17.477956 2261081 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0911 12:27:17.488590 2261081 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0911 12:27:17.488653 2261081 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0911 12:27:17.916188 2261081 kubeadm.go:322] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	
	* 
	* ==> CRI-O <==
	* -- Journal begins at Mon 2023-09-11 12:08:33 UTC, ends at Mon 2023-09-11 12:27:27 UTC. --
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.016914447Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=79c8bb4a-a26f-42b1-b08c-5c858187bfc4 name=/runtime.v1.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.119851595Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1f2c3547-3e86-4d37-a33c-5e631456e318 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.119945750Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1f2c3547-3e86-4d37-a33c-5e631456e318 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.120317975Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1f2c3547-3e86-4d37-a33c-5e631456e318 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.160791022Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=2d66456c-42db-4ce9-b689-00139b3327f7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.160862954Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=2d66456c-42db-4ce9-b689-00139b3327f7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.161045884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=2d66456c-42db-4ce9-b689-00139b3327f7 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.202966690Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=df8bcea2-734b-44d5-bd98-d29c53f9c501 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.203037810Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=df8bcea2-734b-44d5-bd98-d29c53f9c501 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.203327145Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=df8bcea2-734b-44d5-bd98-d29c53f9c501 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.244297184Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=4a646750-da26-420a-b871-f6014473e379 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.244390048Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=4a646750-da26-420a-b871-f6014473e379 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.244691887Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=4a646750-da26-420a-b871-f6014473e379 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.290914441Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=6d4a432d-f9ac-4b50-ae79-e49dbd8f8667 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.291239524Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=6d4a432d-f9ac-4b50-ae79-e49dbd8f8667 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.291523189Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=6d4a432d-f9ac-4b50-ae79-e49dbd8f8667 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.339502991Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=81d3a3a7-cfbc-44a8-b376-0869f227a910 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.339604208Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=81d3a3a7-cfbc-44a8-b376-0869f227a910 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.339893141Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=81d3a3a7-cfbc-44a8-b376-0869f227a910 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.381247171Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=0579400d-3dc3-4b18-bb00-7609b97dcc00 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.381382613Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=0579400d-3dc3-4b18-bb00-7609b97dcc00 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.381590043Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=0579400d-3dc3-4b18-bb00-7609b97dcc00 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.418388757Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="go-grpc-middleware/chain.go:25" id=1549f1fa-34e0-4c40-88f7-1018be7fc682 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.418454619Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:58" id=1549f1fa-34e0-4c40-88f7-1018be7fc682 name=/runtime.v1alpha2.RuntimeService/ListContainers
	Sep 11 12:27:27 no-preload-352076 crio[710]: time="2023-09-11 12:27:27.418621075Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895,PodSandboxId:a10586f48a6b403df7bda4c454ee0e8b455e3117bad5bcb5095f91b22730f10f,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},},ImageRef:gcr.io/k8s-minikube/storage-provisioner@sha256:eaba9cb00c07766583bd527cc3d4dbf002c2587dcee4952d2ee56e8562346651,State:CONTAINER_RUNNING,CreatedAt:1694434480555175673,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c5d1acfb-fa11-4a73-9176-21aee3e2ab99,},Annotations:map[string]string{io.kubernetes.container.hash: 4c2537a,io.kubernetes.container.restartCount: 0,io.kuber
netes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935,PodSandboxId:ab9742ea8a542018385d989ed9bbe7db9dcacc97a1fcde06afc0e3613bdbaeb7,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},},ImageRef:registry.k8s.io/coredns/coredns@sha256:5e05ab4ed722431a3658447fdec1a76ca70a6c878d8d4f34a709b38d4d776fb3,State:CONTAINER_RUNNING,CreatedAt:1694434479734526067,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-6w2w7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fe585a8f-a92f-4497-b399-d759c995f9e6,},Annotations:map[string]string{io.kubernetes.container.hash: 39e5b388,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"pro
tocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0,PodSandboxId:7b12c788adaf9a70d8181c4c90adc0fb5a5ff5b12a3825e9fb2e60526d12ad3d,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-proxy@sha256:a41c5cfa524a29cab9116589129573093b04a5b9173565f754b8ef1e1718e811,State:CONTAINER_RUNNING,CreatedAt:1694434477500453870,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-f5w2x,io.kubernetes.pod.namespace: kube-system,io.kubernetes.po
d.uid: 03e8a2b5-aaf8-4fd7-920e-033a44729398,},Annotations:map[string]string{io.kubernetes.container.hash: 4213a1ec,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c,PodSandboxId:15a95c507ead008dc28a98bc2d717bc9fb065232662ed93c6344e206c08dd9a9,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-scheduler@sha256:63742d5a136fba4d4685c146939fa9ca6afb5e26913218fa3c1ea5dc28af0148,State:CONTAINER_RUNNING,CreatedAt:1694434454526896309,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid:
f7cea54bc5023a25cc6c8d99a5d8b950,},Annotations:map[string]string{io.kubernetes.container.hash: 61920a46,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb,PodSandboxId:8f7ed7ddc0b5c57a7a50d4f511f2f5f189a364a0f926409d1f21af6030e898bb,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},},ImageRef:registry.k8s.io/etcd@sha256:bcaca7dd21ae176571af772b793432ebbde025963a60f0596bbc6032987bbdec,State:CONTAINER_RUNNING,CreatedAt:1694434454410684828,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8700a49322597c8b3583eccc1568ff8e,},Annotations:map[
string]string{io.kubernetes.container.hash: ea28e9a3,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18,PodSandboxId:75302ad460aebfcfdd40f077b6b4573e4d60cfec2b250772d6cf5279147f2699,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-apiserver@sha256:c49646782be18ef2ee98f58bb9dc3f1a0abfb5182a1589378ae860f59dfaf751,State:CONTAINER_RUNNING,CreatedAt:1694434453861278425,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c08efb199081eefea7071b4f0ff8574c,},Annotations:map[string]stri
ng{io.kubernetes.container.hash: 73e435d6,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067,PodSandboxId:b33e71d3d0f20c8a7c526582efc0f870352c9363f5be00202eb844516500132d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac,Annotations:map[string]string{},},ImageRef:registry.k8s.io/kube-controller-manager@sha256:bacf4a202e8bbb17b0e3934cb59b6c07a6a84fa32ea2199d6afba0dd386c0bb5,State:CONTAINER_RUNNING,CreatedAt:1694434453644689718,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-352076,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e5c2adf841bc1ed23c1212ed6429e003,},An
notations:map[string]string{io.kubernetes.container.hash: d4f2c7a0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="go-grpc-middleware/chain.go:25" id=1549f1fa-34e0-4c40-88f7-1018be7fc682 name=/runtime.v1alpha2.RuntimeService/ListContainers
	
	* 
	* ==> container status <==
	* CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID
	0a0c88ff1a170       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   12 minutes ago      Running             storage-provisioner       0                   a10586f48a6b4
	14521a0d7dd6e       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   12 minutes ago      Running             coredns                   0                   ab9742ea8a542
	415dac0b82907       6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5   12 minutes ago      Running             kube-proxy                0                   7b12c788adaf9
	ffa489dcdfa40       b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a   13 minutes ago      Running             kube-scheduler            2                   15a95c507ead0
	262c730a5965c       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   13 minutes ago      Running             etcd                      2                   8f7ed7ddc0b5c
	286d8fe64e428       5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77   13 minutes ago      Running             kube-apiserver            2                   75302ad460aeb
	20d2f3a34c9c4       821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac   13 minutes ago      Running             kube-controller-manager   2                   b33e71d3d0f20
	
	* 
	* ==> coredns [14521a0d7dd6e6c987484932b6eb5d30d7e2bca58141037bb2f0a9dbd8dd8935] <==
	* [INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = efc9280483fcc4efbcb768f0bb0eae8d655d9a6698f75ec2e812845ddebe357ede76757b8e27e6a6ace121d67e74d46126ca7de2feaa2fdd891a0e8e676dd4cb
	[INFO] Reloading complete
	[INFO] 127.0.0.1:37861 - 3620 "HINFO IN 8702822923671551097.7223591626485362324. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.011145084s
	
	* 
	* ==> describe nodes <==
	* Name:               no-preload-352076
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-352076
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=58460de6978298fe1c37b30354468f3a287d03e9
	                    minikube.k8s.io/name=no-preload-352076
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2023_09_11T12_14_22_0700
	                    minikube.k8s.io/version=v1.31.2
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 11 Sep 2023 12:14:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-352076
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 11 Sep 2023 12:27:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 11 Sep 2023 12:24:55 +0000   Mon, 11 Sep 2023 12:14:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 11 Sep 2023 12:24:55 +0000   Mon, 11 Sep 2023 12:14:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 11 Sep 2023 12:24:55 +0000   Mon, 11 Sep 2023 12:14:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 11 Sep 2023 12:24:55 +0000   Mon, 11 Sep 2023 12:14:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.72.157
	  Hostname:    no-preload-352076
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17784752Ki
	  hugepages-2Mi:      0
	  memory:             2165900Ki
	  pods:               110
	System Info:
	  Machine ID:                 0122708b2c1a4702991090a6268bbc2f
	  System UUID:                0122708b-2c1a-4702-9910-90a6268bbc2f
	  Boot ID:                    658ac0ba-9db1-4043-b24a-5bbe17435b9e
	  Kernel Version:             5.10.57
	  OS Image:                   Buildroot 2021.02.12
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.1
	  Kubelet Version:            v1.28.1
	  Kube-Proxy Version:         v1.28.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-6w2w7                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     12m
	  kube-system                 etcd-no-preload-352076                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kube-apiserver-no-preload-352076             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-no-preload-352076    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-f5w2x                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-no-preload-352076             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-57f55c9bc5-r8mgg              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         12m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 12m                kube-proxy       
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x9 over 13m)  kubelet          Node no-preload-352076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x7 over 13m)  kubelet          Node no-preload-352076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-352076 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m                kubelet          Node no-preload-352076 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m                kubelet          Node no-preload-352076 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m                kubelet          Node no-preload-352076 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             13m                kubelet          Node no-preload-352076 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                13m                kubelet          Node no-preload-352076 status is now: NodeReady
	  Normal  RegisteredNode           12m                node-controller  Node no-preload-352076 event: Registered Node no-preload-352076 in Controller
	
	* 
	* ==> dmesg <==
	* [Sep11 12:08] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.102368] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +5.505583] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.952728] systemd-fstab-generator[114]: Ignoring "noauto" for root device
	[  +0.174470] systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
	[  +0.000003] systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
	[  +2.583765] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000018] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.658194] systemd-fstab-generator[637]: Ignoring "noauto" for root device
	[  +0.134197] systemd-fstab-generator[648]: Ignoring "noauto" for root device
	[  +0.165519] systemd-fstab-generator[661]: Ignoring "noauto" for root device
	[  +0.126876] systemd-fstab-generator[672]: Ignoring "noauto" for root device
	[  +0.263940] systemd-fstab-generator[696]: Ignoring "noauto" for root device
	[Sep11 12:09] systemd-fstab-generator[1213]: Ignoring "noauto" for root device
	[ +19.715922] kauditd_printk_skb: 29 callbacks suppressed
	[Sep11 12:14] systemd-fstab-generator[3809]: Ignoring "noauto" for root device
	[ +10.831527] systemd-fstab-generator[4138]: Ignoring "noauto" for root device
	
	* 
	* ==> etcd [262c730a5965c6ab42b89cfdcc46abb853fe4ac1a00b30c0d1a2aec2dfc1f8eb] <==
	* {"level":"info","ts":"2023-09-11T12:14:16.433752Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2023-09-11T12:14:16.439639Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"e97ba2b9037c192e","initial-advertise-peer-urls":["https://192.168.72.157:2380"],"listen-peer-urls":["https://192.168.72.157:2380"],"advertise-client-urls":["https://192.168.72.157:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.72.157:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2023-09-11T12:14:16.439746Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2023-09-11T12:14:16.751548Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e is starting a new election at term 1"}
	{"level":"info","ts":"2023-09-11T12:14:16.751615Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became pre-candidate at term 1"}
	{"level":"info","ts":"2023-09-11T12:14:16.751657Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e received MsgPreVoteResp from e97ba2b9037c192e at term 1"}
	{"level":"info","ts":"2023-09-11T12:14:16.751672Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became candidate at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.751678Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e received MsgVoteResp from e97ba2b9037c192e at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.751688Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"e97ba2b9037c192e became leader at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.751707Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: e97ba2b9037c192e elected leader e97ba2b9037c192e at term 2"}
	{"level":"info","ts":"2023-09-11T12:14:16.753634Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"e97ba2b9037c192e","local-member-attributes":"{Name:no-preload-352076 ClientURLs:[https://192.168.72.157:2379]}","request-path":"/0/members/e97ba2b9037c192e/attributes","cluster-id":"2d4154f8677556f0","publish-timeout":"7s"}
	{"level":"info","ts":"2023-09-11T12:14:16.753876Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.75412Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:14:16.755358Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2023-09-11T12:14:16.755429Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2023-09-11T12:14:16.755485Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"2d4154f8677556f0","local-member-id":"e97ba2b9037c192e","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.755533Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2023-09-11T12:14:16.755755Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.755817Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2023-09-11T12:14:16.755493Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2023-09-11T12:14:16.762518Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.72.157:2379"}
	{"level":"info","ts":"2023-09-11T12:24:16.794956Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":719}
	{"level":"info","ts":"2023-09-11T12:24:16.798906Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":719,"took":"3.308668ms","hash":3501066630}
	{"level":"info","ts":"2023-09-11T12:24:16.798993Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3501066630,"revision":719,"compact-revision":-1}
	{"level":"info","ts":"2023-09-11T12:27:18.905451Z","caller":"traceutil/trace.go:171","msg":"trace[1955480146] transaction","detail":"{read_only:false; response_revision:1111; number_of_response:1; }","duration":"154.754988ms","start":"2023-09-11T12:27:18.750643Z","end":"2023-09-11T12:27:18.905398Z","steps":["trace[1955480146] 'process raft request'  (duration: 154.612015ms)"],"step_count":1}
	
	* 
	* ==> kernel <==
	*  12:27:27 up 19 min,  0 users,  load average: 0.15, 0.21, 0.20
	Linux no-preload-352076 5.10.57 #1 SMP Thu Aug 24 14:58:46 UTC 2023 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2021.02.12"
	
	* 
	* ==> kube-apiserver [286d8fe64e4289faf3927b529cf58b4de88eab890615af705c3ebe40dff8bf18] <==
	* , Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:24:19.699958       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:24:19.700162       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:24:19.700215       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:24:19.701487       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:25:18.604971       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.239.255:443: connect: connection refused
	I0911 12:25:18.605369       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:25:19.700782       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:25:19.700943       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:25:19.700956       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:25:19.702203       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:25:19.702252       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:25:19.702270       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0911 12:26:18.605310       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.239.255:443: connect: connection refused
	I0911 12:26:18.605371       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0911 12:27:18.604925       1 handler_discovery.go:337] DiscoveryManager: Failed to download discovery for kube-system/metrics-server:443: 503 error trying to reach service: dial tcp 10.109.239.255:443: connect: connection refused
	I0911 12:27:18.605344       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0911 12:27:19.701610       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:27:19.701872       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0911 12:27:19.702001       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0911 12:27:19.702665       1 handler_proxy.go:93] no RequestInfo found in the context
	E0911 12:27:19.702724       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0911 12:27:19.704011       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	* 
	* ==> kube-controller-manager [20d2f3a34c9c4bcc3e5884ffd33193f5c2d2951620eaebfb2808015f535e5067] <==
	* I0911 12:21:36.570121       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:22:06.035907       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:22:06.580872       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:22:36.044236       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:22:36.590477       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:23:06.053187       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:23:06.601013       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:23:36.060535       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:23:36.610952       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:24:06.067340       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:24:06.622812       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:24:36.076042       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:24:36.633259       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:25:06.087145       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:25:06.644711       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:25:36.095117       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:25:36.656702       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0911 12:25:37.030662       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="452.835µs"
	I0911 12:25:51.034345       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="226.985µs"
	E0911 12:26:06.102217       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:26:06.678519       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:26:36.109957       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:26:36.692778       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0911 12:27:06.116399       1 resource_quota_controller.go:441] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0911 12:27:06.703761       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	* 
	* ==> kube-proxy [415dac0b829079646282e58dfdcd09b596b4b8817767e15a96080a3432f9a6f0] <==
	* I0911 12:14:39.009971       1 server_others.go:69] "Using iptables proxy"
	I0911 12:14:39.203688       1 node.go:141] Successfully retrieved node IP: 192.168.72.157
	I0911 12:14:39.491235       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0911 12:14:39.491324       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0911 12:14:39.497310       1 server_others.go:152] "Using iptables Proxier"
	I0911 12:14:39.497663       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0911 12:14:39.497978       1 server.go:846] "Version info" version="v1.28.1"
	I0911 12:14:39.498410       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0911 12:14:39.499603       1 config.go:188] "Starting service config controller"
	I0911 12:14:39.499762       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0911 12:14:39.499906       1 config.go:97] "Starting endpoint slice config controller"
	I0911 12:14:39.499987       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0911 12:14:39.503433       1 config.go:315] "Starting node config controller"
	I0911 12:14:39.503578       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0911 12:14:39.601660       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0911 12:14:39.654332       1 shared_informer.go:318] Caches are synced for service config
	I0911 12:14:39.654364       1 shared_informer.go:318] Caches are synced for node config
	
	* 
	* ==> kube-scheduler [ffa489dcdfa40d6372d8ea348af841efd6088d8922e8da9a9f37f990cd438d9c] <==
	* W0911 12:14:18.850392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 12:14:18.850422       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 12:14:18.849994       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 12:14:18.850473       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 12:14:19.711609       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0911 12:14:19.711736       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0911 12:14:19.782452       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0911 12:14:19.782548       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0911 12:14:19.789783       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0911 12:14:19.789899       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0911 12:14:19.855991       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0911 12:14:19.856120       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0911 12:14:19.947335       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0911 12:14:19.947391       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0911 12:14:19.964462       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0911 12:14:19.964529       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0911 12:14:20.045503       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0911 12:14:20.045578       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0911 12:14:20.093463       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0911 12:14:20.093561       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0911 12:14:20.208016       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0911 12:14:20.208223       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0911 12:14:20.242961       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0911 12:14:20.243169       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	I0911 12:14:21.915567       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	* 
	* ==> kubelet <==
	* -- Journal begins at Mon 2023-09-11 12:08:33 UTC, ends at Mon 2023-09-11 12:27:28 UTC. --
	Sep 11 12:25:22 no-preload-352076 kubelet[4146]: E0911 12:25:22.032988    4146 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 11 12:25:22 no-preload-352076 kubelet[4146]: E0911 12:25:22.033227    4146 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Sep 11 12:25:22 no-preload-352076 kubelet[4146]: E0911 12:25:22.033554    4146 kuberuntime_manager.go:1209] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-wn7xq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-r8mgg_kube-system(a54edaa0-b800-48f3-99bc-7d38adb834d0): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Sep 11 12:25:22 no-preload-352076 kubelet[4146]: E0911 12:25:22.033600    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:25:23 no-preload-352076 kubelet[4146]: E0911 12:25:23.154869    4146 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:25:23 no-preload-352076 kubelet[4146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:25:23 no-preload-352076 kubelet[4146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:25:23 no-preload-352076 kubelet[4146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:25:37 no-preload-352076 kubelet[4146]: E0911 12:25:37.010729    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:25:51 no-preload-352076 kubelet[4146]: E0911 12:25:51.014255    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:26:06 no-preload-352076 kubelet[4146]: E0911 12:26:06.011726    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:26:21 no-preload-352076 kubelet[4146]: E0911 12:26:21.012224    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:26:23 no-preload-352076 kubelet[4146]: E0911 12:26:23.156642    4146 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:26:23 no-preload-352076 kubelet[4146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:26:23 no-preload-352076 kubelet[4146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:26:23 no-preload-352076 kubelet[4146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:26:32 no-preload-352076 kubelet[4146]: E0911 12:26:32.011774    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:26:47 no-preload-352076 kubelet[4146]: E0911 12:26:47.011758    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:27:01 no-preload-352076 kubelet[4146]: E0911 12:27:01.012005    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:27:13 no-preload-352076 kubelet[4146]: E0911 12:27:13.013960    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	Sep 11 12:27:23 no-preload-352076 kubelet[4146]: E0911 12:27:23.154857    4146 iptables.go:575] "Could not set up iptables canary" err=<
	Sep 11 12:27:23 no-preload-352076 kubelet[4146]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: ip6tables v1.8.6 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Sep 11 12:27:23 no-preload-352076 kubelet[4146]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Sep 11 12:27:23 no-preload-352076 kubelet[4146]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Sep 11 12:27:25 no-preload-352076 kubelet[4146]: E0911 12:27:25.011552    4146 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-r8mgg" podUID="a54edaa0-b800-48f3-99bc-7d38adb834d0"
	
	* 
	* ==> storage-provisioner [0a0c88ff1a1705fec95b0f00361ae6766a1d35ade507ded0db7324ea9bb97895] <==
	* I0911 12:14:40.681805       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0911 12:14:40.696183       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0911 12:14:40.696379       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0911 12:14:40.708716       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0911 12:14:40.710557       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-352076_62ea1927-0c6d-4568-abab-cc82d93b0ac1!
	I0911 12:14:40.709035       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"a9c563c4-5421-4c9d-90e2-aa74b649c30e", APIVersion:"v1", ResourceVersion:"457", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-352076_62ea1927-0c6d-4568-abab-cc82d93b0ac1 became leader
	I0911 12:14:40.811537       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-352076_62ea1927-0c6d-4568-abab-cc82d93b0ac1!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-352076 -n no-preload-352076
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-352076 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-r8mgg
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-352076 describe pod metrics-server-57f55c9bc5-r8mgg
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-352076 describe pod metrics-server-57f55c9bc5-r8mgg: exit status 1 (88.245832ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-r8mgg" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-352076 describe pod metrics-server-57f55c9bc5-r8mgg: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (221.92s)
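Post-mortem note: the NotFound error above means the pod named in the non-running list no longer existed by the time it was described. A minimal sketch of re-running the same check by hand, assuming the no-preload-352076 profile from this run is still present and that the metrics-server addon keeps its usual k8s-app=metrics-server label (an assumption, not shown in this log):

	# list the addon's pods by label instead of a previously captured pod name
	kubectl --context no-preload-352076 -n kube-system get pods -l k8s-app=metrics-server
	# describe whatever currently matches the label, avoiding the NotFound race above
	kubectl --context no-preload-352076 -n kube-system describe pods -l k8s-app=metrics-server

Selecting by label rather than by a stored pod name keeps the manual check valid even after the ReplicaSet replaces the pod.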

                                                
                                    

Test pass (225/288)

Order passed test Duration
3 TestDownloadOnly/v1.16.0/json-events 9.21
4 TestDownloadOnly/v1.16.0/preload-exists 0
8 TestDownloadOnly/v1.16.0/LogsDuration 0.07
10 TestDownloadOnly/v1.28.1/json-events 4.48
11 TestDownloadOnly/v1.28.1/preload-exists 0
15 TestDownloadOnly/v1.28.1/LogsDuration 0.07
16 TestDownloadOnly/DeleteAll 0.14
17 TestDownloadOnly/DeleteAlwaysSucceeds 0.13
19 TestBinaryMirror 0.58
20 TestOffline 138.3
22 TestAddons/Setup 145.51
24 TestAddons/parallel/Registry 15.98
27 TestAddons/parallel/MetricsServer 7.16
28 TestAddons/parallel/HelmTiller 11.39
30 TestAddons/parallel/CSI 62.86
31 TestAddons/parallel/Headlamp 15.7
32 TestAddons/parallel/CloudSpanner 6.53
35 TestAddons/serial/GCPAuth/Namespaces 0.13
37 TestCertOptions 118.63
38 TestCertExpiration 298.14
40 TestForceSystemdFlag 85.86
41 TestForceSystemdEnv 115.68
43 TestKVMDriverInstallOrUpdate 1.65
47 TestErrorSpam/setup 48.41
48 TestErrorSpam/start 0.37
49 TestErrorSpam/status 0.8
50 TestErrorSpam/pause 1.57
51 TestErrorSpam/unpause 1.8
52 TestErrorSpam/stop 2.25
55 TestFunctional/serial/CopySyncFile 0
56 TestFunctional/serial/StartWithProxy 65.03
57 TestFunctional/serial/AuditLog 0
58 TestFunctional/serial/SoftStart 52.99
59 TestFunctional/serial/KubeContext 0.04
60 TestFunctional/serial/KubectlGetPods 0.08
63 TestFunctional/serial/CacheCmd/cache/add_remote 3.22
64 TestFunctional/serial/CacheCmd/cache/add_local 1.08
65 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
66 TestFunctional/serial/CacheCmd/cache/list 0.05
67 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.22
68 TestFunctional/serial/CacheCmd/cache/cache_reload 1.62
69 TestFunctional/serial/CacheCmd/cache/delete 0.09
70 TestFunctional/serial/MinikubeKubectlCmd 0.11
71 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.11
72 TestFunctional/serial/ExtraConfig 37.5
73 TestFunctional/serial/ComponentHealth 0.08
74 TestFunctional/serial/LogsCmd 1.48
75 TestFunctional/serial/LogsFileCmd 1.51
76 TestFunctional/serial/InvalidService 4.33
78 TestFunctional/parallel/ConfigCmd 0.36
79 TestFunctional/parallel/DashboardCmd 44.81
80 TestFunctional/parallel/DryRun 0.53
81 TestFunctional/parallel/InternationalLanguage 0.17
82 TestFunctional/parallel/StatusCmd 1.12
86 TestFunctional/parallel/ServiceCmdConnect 12.63
87 TestFunctional/parallel/AddonsCmd 0.15
88 TestFunctional/parallel/PersistentVolumeClaim 55.76
90 TestFunctional/parallel/SSHCmd 0.49
91 TestFunctional/parallel/CpCmd 1.06
92 TestFunctional/parallel/MySQL 34.16
93 TestFunctional/parallel/FileSync 0.28
94 TestFunctional/parallel/CertSync 1.58
98 TestFunctional/parallel/NodeLabels 0.07
100 TestFunctional/parallel/NonActiveRuntimeDisabled 0.49
102 TestFunctional/parallel/License 0.18
103 TestFunctional/parallel/Version/short 0.05
104 TestFunctional/parallel/Version/components 0.95
105 TestFunctional/parallel/UpdateContextCmd/no_changes 0.1
106 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
107 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.1
108 TestFunctional/parallel/ServiceCmd/DeployApp 12.29
118 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
119 TestFunctional/parallel/ProfileCmd/profile_list 0.35
120 TestFunctional/parallel/ProfileCmd/profile_json_output 0.41
121 TestFunctional/parallel/MountCmd/any-port 9.08
122 TestFunctional/parallel/ServiceCmd/List 0.47
123 TestFunctional/parallel/MountCmd/specific-port 2.01
124 TestFunctional/parallel/ServiceCmd/JSONOutput 0.54
125 TestFunctional/parallel/ServiceCmd/HTTPS 0.69
126 TestFunctional/parallel/ServiceCmd/Format 0.51
127 TestFunctional/parallel/ServiceCmd/URL 0.57
128 TestFunctional/parallel/MountCmd/VerifyCleanup 1.92
129 TestFunctional/parallel/ImageCommands/ImageListShort 0.25
130 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
131 TestFunctional/parallel/ImageCommands/ImageListJson 0.24
132 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
133 TestFunctional/parallel/ImageCommands/ImageBuild 2.66
134 TestFunctional/parallel/ImageCommands/Setup 1.18
135 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 5.95
136 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 5.02
137 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 11.72
138 TestFunctional/parallel/ImageCommands/ImageSaveToFile 2.17
139 TestFunctional/parallel/ImageCommands/ImageRemove 0.83
140 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 5.82
141 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.4
142 TestFunctional/delete_addon-resizer_images 0.07
143 TestFunctional/delete_my-image_image 0.02
144 TestFunctional/delete_minikube_cached_images 0.02
148 TestIngressAddonLegacy/StartLegacyK8sCluster 78.38
150 TestIngressAddonLegacy/serial/ValidateIngressAddonActivation 14.56
151 TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation 0.61
155 TestJSONOutput/start/Command 64.39
156 TestJSONOutput/start/Audit 0
158 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
159 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
161 TestJSONOutput/pause/Command 0.73
162 TestJSONOutput/pause/Audit 0
164 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
165 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
167 TestJSONOutput/unpause/Command 0.63
168 TestJSONOutput/unpause/Audit 0
170 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
171 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
173 TestJSONOutput/stop/Command 7.11
174 TestJSONOutput/stop/Audit 0
176 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
177 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
178 TestErrorJSONOutput 0.21
183 TestMainNoArgs 0.05
184 TestMinikubeProfile 100.9
187 TestMountStart/serial/StartWithMountFirst 28.65
188 TestMountStart/serial/VerifyMountFirst 0.43
189 TestMountStart/serial/StartWithMountSecond 30.91
190 TestMountStart/serial/VerifyMountSecond 0.45
191 TestMountStart/serial/DeleteFirst 0.9
192 TestMountStart/serial/VerifyMountPostDelete 0.41
193 TestMountStart/serial/Stop 1.17
194 TestMountStart/serial/RestartStopped 26.22
195 TestMountStart/serial/VerifyMountPostStop 0.39
198 TestMultiNode/serial/FreshStart2Nodes 111.42
199 TestMultiNode/serial/DeployApp2Nodes 6.04
201 TestMultiNode/serial/AddNode 41.46
202 TestMultiNode/serial/ProfileList 0.22
203 TestMultiNode/serial/CopyFile 7.57
204 TestMultiNode/serial/StopNode 2.99
205 TestMultiNode/serial/StartAfterStop 32.31
207 TestMultiNode/serial/DeleteNode 1.82
209 TestMultiNode/serial/RestartMultiNode 533.47
210 TestMultiNode/serial/ValidateNameConflict 51.83
217 TestScheduledStopUnix 125.14
223 TestKubernetesUpgrade 158.63
225 TestStoppedBinaryUpgrade/Setup 0.35
235 TestPause/serial/Start 105.88
237 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
238 TestNoKubernetes/serial/StartWithK8s 63.55
246 TestNetworkPlugins/group/false 3.64
250 TestNoKubernetes/serial/StartWithStopK8s 15.47
251 TestNoKubernetes/serial/Start 29.19
253 TestNoKubernetes/serial/VerifyK8sNotRunning 0.22
254 TestNoKubernetes/serial/ProfileList 1.03
255 TestNoKubernetes/serial/Stop 1.28
256 TestNoKubernetes/serial/StartNoArgs 68.94
257 TestStoppedBinaryUpgrade/MinikubeLogs 0.46
258 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.21
260 TestStartStop/group/old-k8s-version/serial/FirstStart 212.98
262 TestStartStop/group/no-preload/serial/FirstStart 143.77
264 TestStartStop/group/embed-certs/serial/FirstStart 96.79
265 TestStartStop/group/no-preload/serial/DeployApp 9.46
266 TestStartStop/group/embed-certs/serial/DeployApp 9.46
267 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.28
269 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.25
271 TestStartStop/group/old-k8s-version/serial/DeployApp 8.46
272 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.93
275 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 60.02
276 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.5
278 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.14
281 TestStartStop/group/no-preload/serial/SecondStart 730.09
283 TestStartStop/group/embed-certs/serial/SecondStart 623.56
284 TestStartStop/group/old-k8s-version/serial/SecondStart 347.86
286 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 494.02
296 TestStartStop/group/newest-cni/serial/FirstStart 62.65
297 TestNetworkPlugins/group/auto/Start 71.3
298 TestStartStop/group/newest-cni/serial/DeployApp 0
299 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.75
300 TestStartStop/group/newest-cni/serial/Stop 11.12
301 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.2
302 TestStartStop/group/newest-cni/serial/SecondStart 59.55
303 TestNetworkPlugins/group/kindnet/Start 79.22
304 TestNetworkPlugins/group/auto/KubeletFlags 0.22
305 TestNetworkPlugins/group/auto/NetCatPod 11.35
306 TestNetworkPlugins/group/auto/DNS 0.19
307 TestNetworkPlugins/group/auto/Localhost 0.17
308 TestNetworkPlugins/group/auto/HairPin 0.18
309 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
310 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
311 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.31
312 TestStartStop/group/newest-cni/serial/Pause 4.09
313 TestNetworkPlugins/group/calico/Start 100.56
314 TestNetworkPlugins/group/custom-flannel/Start 114.88
315 TestNetworkPlugins/group/kindnet/ControllerPod 5.03
316 TestNetworkPlugins/group/kindnet/KubeletFlags 0.27
317 TestNetworkPlugins/group/kindnet/NetCatPod 14.45
318 TestNetworkPlugins/group/kindnet/DNS 0.21
319 TestNetworkPlugins/group/kindnet/Localhost 0.21
320 TestNetworkPlugins/group/kindnet/HairPin 0.18
321 TestNetworkPlugins/group/enable-default-cni/Start 102.76
322 TestNetworkPlugins/group/calico/ControllerPod 5.03
323 TestNetworkPlugins/group/calico/KubeletFlags 0.24
324 TestNetworkPlugins/group/calico/NetCatPod 14.47
325 TestNetworkPlugins/group/calico/DNS 0.26
326 TestNetworkPlugins/group/calico/Localhost 0.23
327 TestNetworkPlugins/group/calico/HairPin 0.23
328 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.23
329 TestNetworkPlugins/group/custom-flannel/NetCatPod 14.4
330 TestNetworkPlugins/group/flannel/Start 83.04
331 TestNetworkPlugins/group/custom-flannel/DNS 0.25
332 TestNetworkPlugins/group/custom-flannel/Localhost 0.19
333 TestNetworkPlugins/group/custom-flannel/HairPin 0.23
334 TestNetworkPlugins/group/bridge/Start 122.5
335 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.23
336 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.34
337 TestNetworkPlugins/group/enable-default-cni/DNS 0.23
338 TestNetworkPlugins/group/enable-default-cni/Localhost 0.23
339 TestNetworkPlugins/group/enable-default-cni/HairPin 0.2
340 TestNetworkPlugins/group/flannel/ControllerPod 5.03
341 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
342 TestNetworkPlugins/group/flannel/NetCatPod 11.34
343 TestNetworkPlugins/group/flannel/DNS 0.18
344 TestNetworkPlugins/group/flannel/Localhost 0.16
345 TestNetworkPlugins/group/flannel/HairPin 0.16
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.21
347 TestNetworkPlugins/group/bridge/NetCatPod 12.29
348 TestNetworkPlugins/group/bridge/DNS 0.17
349 TestNetworkPlugins/group/bridge/Localhost 0.15
350 TestNetworkPlugins/group/bridge/HairPin 0.16
x
+
TestDownloadOnly/v1.16.0/json-events (9.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-461050 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-461050 --force --alsologtostderr --kubernetes-version=v1.16.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (9.211189597s)
--- PASS: TestDownloadOnly/v1.16.0/json-events (9.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/preload-exists
--- PASS: TestDownloadOnly/v1.16.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.16.0/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-461050
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-461050: exit status 85 (64.884524ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |          |
	|         | -p download-only-461050        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 10:56:34
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 10:56:34.768675 2222483 out.go:296] Setting OutFile to fd 1 ...
	I0911 10:56:34.768831 2222483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:34.768848 2222483 out.go:309] Setting ErrFile to fd 2...
	I0911 10:56:34.768856 2222483 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:34.769064 2222483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	W0911 10:56:34.769207 2222483 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17223-2215273/.minikube/config/config.json: open /home/jenkins/minikube-integration/17223-2215273/.minikube/config/config.json: no such file or directory
	I0911 10:56:34.769842 2222483 out.go:303] Setting JSON to true
	I0911 10:56:34.770799 2222483 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":232746,"bootTime":1694197049,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 10:56:34.770879 2222483 start.go:138] virtualization: kvm guest
	I0911 10:56:34.773867 2222483 out.go:97] [download-only-461050] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 10:56:34.775659 2222483 out.go:169] MINIKUBE_LOCATION=17223
	W0911 10:56:34.774018 2222483 preload.go:295] Failed to list preload files: open /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball: no such file or directory
	I0911 10:56:34.774112 2222483 notify.go:220] Checking for updates...
	I0911 10:56:34.778862 2222483 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 10:56:34.780504 2222483 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 10:56:34.781946 2222483 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:34.783424 2222483 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0911 10:56:34.786610 2222483 out.go:272] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0911 10:56:34.786912 2222483 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 10:56:34.824802 2222483 out.go:97] Using the kvm2 driver based on user configuration
	I0911 10:56:34.824852 2222483 start.go:298] selected driver: kvm2
	I0911 10:56:34.824864 2222483 start.go:902] validating driver "kvm2" against <nil>
	I0911 10:56:34.825254 2222483 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 10:56:34.825345 2222483 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/17223-2215273/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0911 10:56:34.841383 2222483 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.31.2
	I0911 10:56:34.841436 2222483 start_flags.go:307] no existing cluster config was found, will generate one from the flags 
	I0911 10:56:34.841954 2222483 start_flags.go:384] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0911 10:56:34.842106 2222483 start_flags.go:904] Wait components to verify : map[apiserver:true system_pods:true]
	I0911 10:56:34.842139 2222483 cni.go:84] Creating CNI manager for ""
	I0911 10:56:34.842149 2222483 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0911 10:56:34.842156 2222483 start_flags.go:316] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0911 10:56:34.842162 2222483 start_flags.go:321] config:
	{Name:download-only-461050 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:6000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.16.0 ClusterName:download-only-461050 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 10:56:34.842392 2222483 iso.go:125] acquiring lock: {Name:mkcbce32848b9c80e3ebc37eea3dfda2dff4509a Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0911 10:56:34.844306 2222483 out.go:97] Downloading VM boot image ...
	I0911 10:56:34.844347 2222483 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso.sha256 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/iso/amd64/minikube-v1.31.0-1692872107-17120-amd64.iso
	I0911 10:56:37.573606 2222483 out.go:97] Starting control plane node download-only-461050 in cluster download-only-461050
	I0911 10:56:37.573656 2222483 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0911 10:56:37.593509 2222483 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	I0911 10:56:37.593546 2222483 cache.go:57] Caching tarball of preloaded images
	I0911 10:56:37.593757 2222483 preload.go:132] Checking if preload exists for k8s version v1.16.0 and runtime crio
	I0911 10:56:37.596064 2222483 out.go:97] Downloading Kubernetes v1.16.0 preload ...
	I0911 10:56:37.596103 2222483 preload.go:238] getting checksum for preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4 ...
	I0911 10:56:37.622476 2222483 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.16.0/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:432b600409d778ea7a21214e83948570 -> /home/jenkins/minikube-integration/17223-2215273/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.16.0-cri-o-overlay-amd64.tar.lz4
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-461050"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.16.0/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/json-events (4.48s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/json-events
aaa_download_only_test.go:69: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-461050 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:69: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-461050 --force --alsologtostderr --kubernetes-version=v1.28.1 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (4.478411512s)
--- PASS: TestDownloadOnly/v1.28.1/json-events (4.48s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/preload-exists
--- PASS: TestDownloadOnly/v1.28.1/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.1/LogsDuration
aaa_download_only_test.go:169: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-461050
aaa_download_only_test.go:169: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-461050: exit status 85 (65.985035ms)

                                                
                                                
-- stdout --
	* 
	* ==> Audit <==
	* |---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |          |
	|         | -p download-only-461050        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.16.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	| start   | -o=json --download-only        | download-only-461050 | jenkins | v1.31.2 | 11 Sep 23 10:56 UTC |          |
	|         | -p download-only-461050        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.28.1   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	* 
	* ==> Last Start <==
	* Log file created at: 2023/09/11 10:56:44
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.20.7 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0911 10:56:44.047032 2222529 out.go:296] Setting OutFile to fd 1 ...
	I0911 10:56:44.047153 2222529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:44.047161 2222529 out.go:309] Setting ErrFile to fd 2...
	I0911 10:56:44.047166 2222529 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 10:56:44.047385 2222529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	W0911 10:56:44.047508 2222529 root.go:314] Error reading config file at /home/jenkins/minikube-integration/17223-2215273/.minikube/config/config.json: open /home/jenkins/minikube-integration/17223-2215273/.minikube/config/config.json: no such file or directory
	I0911 10:56:44.047930 2222529 out.go:303] Setting JSON to true
	I0911 10:56:44.048856 2222529 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":232755,"bootTime":1694197049,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 10:56:44.048927 2222529 start.go:138] virtualization: kvm guest
	I0911 10:56:44.051332 2222529 out.go:97] [download-only-461050] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 10:56:44.053460 2222529 out.go:169] MINIKUBE_LOCATION=17223
	I0911 10:56:44.051577 2222529 notify.go:220] Checking for updates...
	I0911 10:56:44.057165 2222529 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 10:56:44.059206 2222529 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 10:56:44.061356 2222529 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 10:56:44.063388 2222529 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	* 
	* The control plane node "" does not exist.
	  To start a cluster, run: "minikube start -p download-only-461050"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:170: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.1/LogsDuration (0.07s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAll (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAll
aaa_download_only_test.go:187: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/DeleteAll (0.14s)

                                                
                                    
x
+
TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                                
=== RUN   TestDownloadOnly/DeleteAlwaysSucceeds
aaa_download_only_test.go:199: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-461050
--- PASS: TestDownloadOnly/DeleteAlwaysSucceeds (0.13s)

                                                
                                    
x
+
TestBinaryMirror (0.58s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:304: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-417783 --alsologtostderr --binary-mirror http://127.0.0.1:34313 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-417783" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-417783
--- PASS: TestBinaryMirror (0.58s)

                                                
                                    
x
+
TestOffline (138.3s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-549113 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-549113 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (2m17.227435504s)
helpers_test.go:175: Cleaning up "offline-crio-549113" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-549113
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-549113: (1.071524998s)
--- PASS: TestOffline (138.30s)

                                                
                                    
x
+
TestAddons/Setup (145.51s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:88: (dbg) Run:  out/minikube-linux-amd64 start -p addons-554886 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:88: (dbg) Done: out/minikube-linux-amd64 start -p addons-554886 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m25.510190537s)
--- PASS: TestAddons/Setup (145.51s)

                                                
                                    
x
+
TestAddons/parallel/Registry (15.98s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:306: registry stabilized in 28.474877ms
addons_test.go:308: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-t6754" [8531b6ac-003f-4a6d-aab4-67819497ab11] Running
addons_test.go:308: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.368595794s
addons_test.go:311: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-lmsgk" [c3d6d669-7454-4529-b9ac-06abb4face91] Running
addons_test.go:311: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.03352074s
addons_test.go:316: (dbg) Run:  kubectl --context addons-554886 delete po -l run=registry-test --now
addons_test.go:321: (dbg) Run:  kubectl --context addons-554886 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:321: (dbg) Done: kubectl --context addons-554886 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.397859369s)
addons_test.go:335: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 ip
2023/09/11 10:59:30 [DEBUG] GET http://192.168.39.217:5000
addons_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (15.98s)
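For reference, the two reachability checks this test performs can be repeated by hand against the same profile; a minimal sketch, assuming addons-554886 is still running with the registry addon enabled (the pod name registry-check is an arbitrary choice, not the name the test uses):

    # in-cluster check: probe the registry Service by its cluster DNS name
    kubectl --context addons-554886 run registry-check --rm --restart=Never -it \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
    # host-side check: the registry is published on the VM IP at port 5000
    curl -s "http://$(out/minikube-linux-amd64 -p addons-554886 ip):5000"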

                                                
                                    
x
+
TestAddons/parallel/MetricsServer (7.16s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:383: metrics-server stabilized in 28.654965ms
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7c66d45ddc-7krqz" [68915a10-f10d-4296-8a14-8c21f7f71a42] Running
addons_test.go:385: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.369902435s
addons_test.go:391: (dbg) Run:  kubectl --context addons-554886 top pods -n kube-system
addons_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p addons-554886 addons disable metrics-server --alsologtostderr -v=1: (1.677799102s)
--- PASS: TestAddons/parallel/MetricsServer (7.16s)

                                                
                                    
x
+
TestAddons/parallel/HelmTiller (11.39s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:432: tiller-deploy stabilized in 100.241292ms
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-dtz9n" [871f81ec-dd78-4aa4-89e9-5b99419aa8d5] Running
addons_test.go:434: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.091957899s
addons_test.go:449: (dbg) Run:  kubectl --context addons-554886 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:449: (dbg) Done: kubectl --context addons-554886 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.352027669s)
addons_test.go:454: kubectl --context addons-554886 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: unable to upgrade connection: container helm-test not found in pod helm-test_kube-system
addons_test.go:466: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (11.39s)

                                                
                                    
x
+
TestAddons/parallel/CSI (62.86s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:537: csi-hostpath-driver pods stabilized in 9.425172ms
addons_test.go:540: (dbg) Run:  kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:540: (dbg) Done: kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pvc.yaml: (1.058993438s)
addons_test.go:545: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:550: (dbg) Run:  kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a947b917-6a49-4fee-abdd-7b72c2ad8519] Pending
helpers_test.go:344: "task-pv-pod" [a947b917-6a49-4fee-abdd-7b72c2ad8519] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a947b917-6a49-4fee-abdd-7b72c2ad8519] Running
addons_test.go:555: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 13.027135868s
addons_test.go:560: (dbg) Run:  kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-554886 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-554886 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-554886 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:570: (dbg) Run:  kubectl --context addons-554886 delete pod task-pv-pod
addons_test.go:576: (dbg) Run:  kubectl --context addons-554886 delete pvc hpvc
addons_test.go:582: (dbg) Run:  kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:587: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-554886 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:592: (dbg) Run:  kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:597: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6c31e7bf-2e52-4afd-88d9-3d2a22098f66] Pending
helpers_test.go:344: "task-pv-pod-restore" [6c31e7bf-2e52-4afd-88d9-3d2a22098f66] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6c31e7bf-2e52-4afd-88d9-3d2a22098f66] Running
addons_test.go:597: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.022735619s
addons_test.go:602: (dbg) Run:  kubectl --context addons-554886 delete pod task-pv-pod-restore
addons_test.go:602: (dbg) Done: kubectl --context addons-554886 delete pod task-pv-pod-restore: (1.287425552s)
addons_test.go:606: (dbg) Run:  kubectl --context addons-554886 delete pvc hpvc-restore
addons_test.go:610: (dbg) Run:  kubectl --context addons-554886 delete volumesnapshot new-snapshot-demo
addons_test.go:614: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:614: (dbg) Done: out/minikube-linux-amd64 -p addons-554886 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.877839547s)
addons_test.go:618: (dbg) Run:  out/minikube-linux-amd64 -p addons-554886 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (62.86s)
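The block above is, in effect, a snapshot-and-restore walkthrough for the csi-hostpath-driver addon. Condensed into its kubectl steps, using the same testdata manifests the test references (the restore PVC is assumed to use the snapshot as its dataSource):

    kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pvc.yaml            # claim provisioned by the hostpath CSI driver
    kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pv-pod.yaml          # pod that mounts the claim
    kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/snapshot.yaml        # VolumeSnapshot of the bound claim
    kubectl --context addons-554886 delete pod task-pv-pod
    kubectl --context addons-554886 delete pvc hpvc
    kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pvc-restore.yaml     # new claim restored from the snapshot
    kubectl --context addons-554886 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml  # pod that mounts the restored claim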

                                                
                                    
x
+
TestAddons/parallel/Headlamp (15.7s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:800: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-554886 --alsologtostderr -v=1
addons_test.go:800: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-554886 --alsologtostderr -v=1: (1.652337258s)
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-699c48fb74-8w9jw" [ad3210cf-3754-41f6-89a2-f8128558feb4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-699c48fb74-8w9jw" [ad3210cf-3754-41f6-89a2-f8128558feb4] Running
addons_test.go:805: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.043741034s
--- PASS: TestAddons/parallel/Headlamp (15.70s)

                                                
                                    
x
+
TestAddons/parallel/CloudSpanner (6.53s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dcc56475c-l7vhj" [5c2a6c3f-40b4-4e77-a5fc-4ad32aee2862] Running
addons_test.go:833: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.36830184s
addons_test.go:836: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-554886
addons_test.go:836: (dbg) Done: out/minikube-linux-amd64 addons disable cloud-spanner -p addons-554886: (1.129040894s)
--- PASS: TestAddons/parallel/CloudSpanner (6.53s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:626: (dbg) Run:  kubectl --context addons-554886 create ns new-namespace
addons_test.go:640: (dbg) Run:  kubectl --context addons-554886 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.13s)

                                                
                                    
x
+
TestCertOptions (118.63s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-559775 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-559775 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m57.11142589s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-559775 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-559775 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-559775 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-559775" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-559775
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-559775: (1.036033862s)
--- PASS: TestCertOptions (118.63s)
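The openssl check above can be narrowed to just the SAN entries; a minimal sketch, assuming the cert-options-559775 profile were still present (the test deletes it immediately afterwards):

    out/minikube-linux-amd64 -p cert-options-559775 ssh \
      "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
      | grep -A1 "Subject Alternative Name"
    # expected to list localhost, www.google.com and 192.168.15.15 among the SANs;
    # the kubeconfig written for the profile should point at apiserver port 8555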

                                                
                                    
x
+
TestCertExpiration (298.14s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-758549 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-758549 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (1m36.137899691s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-758549 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-758549 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (20.937750098s)
helpers_test.go:175: Cleaning up "cert-expiration-758549" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-758549
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-758549: (1.063720378s)
--- PASS: TestCertExpiration (298.14s)
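A quick way to observe the effect of --cert-expiration is to read the apiserver certificate's end date from inside the node; a hand sketch (not part of the test), reusing the certificate path shown by TestCertOptions above:

    out/minikube-linux-amd64 -p cert-expiration-758549 ssh \
      "openssl x509 -noout -enddate -in /var/lib/minikube/certs/apiserver.crt"
    # after the first start this should be roughly three minutes out;
    # after the second start with --cert-expiration=8760h, roughly one year out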

                                                
                                    
x
+
TestForceSystemdFlag (85.86s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-044713 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0911 11:54:15.053050 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-044713 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m24.478030853s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-044713 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-044713" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-044713
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-044713: (1.176252972s)
--- PASS: TestForceSystemdFlag (85.86s)
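The assertion at docker_test.go:132 reads the CRI-O drop-in that minikube generates; an equivalent manual check, assuming the conventional cgroup_manager key appears in that file, would be:

    out/minikube-linux-amd64 -p force-systemd-flag-044713 ssh \
      "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
    # with --force-systemd this is expected to show: cgroup_manager = "systemd"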

                                                
                                    
x
+
TestForceSystemdEnv (115.68s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-901219 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-901219 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m54.629431093s)
helpers_test.go:175: Cleaning up "force-systemd-env-901219" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-901219
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-901219: (1.054743754s)
--- PASS: TestForceSystemdEnv (115.68s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (1.65s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.65s)

                                                
                                    
x
+
TestErrorSpam/setup (48.41s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-837139 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-837139 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-837139 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-837139 --driver=kvm2  --container-runtime=crio: (48.411169816s)
--- PASS: TestErrorSpam/setup (48.41s)

                                                
                                    
x
+
TestErrorSpam/start (0.37s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 start --dry-run
--- PASS: TestErrorSpam/start (0.37s)

                                                
                                    
x
+
TestErrorSpam/status (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 status
--- PASS: TestErrorSpam/status (0.80s)

                                                
                                    
x
+
TestErrorSpam/pause (1.57s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 pause
--- PASS: TestErrorSpam/pause (1.57s)

                                                
                                    
x
+
TestErrorSpam/unpause (1.8s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 unpause
--- PASS: TestErrorSpam/unpause (1.80s)

                                                
                                    
x
+
TestErrorSpam/stop (2.25s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 stop: (2.089367122s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-837139 --log_dir /tmp/nospam-837139 stop
--- PASS: TestErrorSpam/stop (2.25s)

                                                
                                    
x
+
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/17223-2215273/.minikube/files/etc/test/nested/copy/2222471/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
x
+
TestFunctional/serial/StartWithProxy (65.03s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-312672 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-312672 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m5.027951227s)
--- PASS: TestFunctional/serial/StartWithProxy (65.03s)

                                                
                                    
x
+
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
x
+
TestFunctional/serial/SoftStart (52.99s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-312672 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-312672 --alsologtostderr -v=8: (52.988990181s)
functional_test.go:659: soft start took 52.989840202s for "functional-312672" cluster.
--- PASS: TestFunctional/serial/SoftStart (52.99s)

                                                
                                    
x
+
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
x
+
TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-312672 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 cache add registry.k8s.io/pause:3.1: (1.018178378s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 cache add registry.k8s.io/pause:3.3: (1.080261988s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 cache add registry.k8s.io/pause:latest: (1.117972711s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-312672 /tmp/TestFunctionalserialCacheCmdcacheadd_local2567911841/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cache add minikube-local-cache-test:functional-312672
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cache delete minikube-local-cache-test:functional-312672
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-312672
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.08s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.22s)

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (220.485209ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.62s)
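The sequence above doubles as a recipe for repopulating a node whose cached images were removed; condensed, assuming the functional-312672 profile is running:

    out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl rmi registry.k8s.io/pause:latest
    out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
    out/minikube-linux-amd64 -p functional-312672 cache reload                                            # re-pushes cached images to the node
    out/minikube-linux-amd64 -p functional-312672 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again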

                                                
                                    
x
+
TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 kubectl -- --context functional-312672 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.11s)

                                                
                                    
x
+
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-312672 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.11s)

                                                
                                    
x
+
TestFunctional/serial/ExtraConfig (37.5s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-312672 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-312672 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (37.496599251s)
functional_test.go:757: restart took 37.496756592s for "functional-312672" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (37.50s)
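The restart above passes a component flag through to the apiserver via --extra-config. A minimal sketch of the same invocation, assuming an existing functional-312672 profile:

    # restart the cluster with an extra kube-apiserver admission plugin and wait for all components to be ready
    out/minikube-linux-amd64 start -p functional-312672 \
      --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision \
      --wait=all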

                                                
                                    
x
+
TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-312672 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.08s)

                                                
                                    
x
+
TestFunctional/serial/LogsCmd (1.48s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 logs: (1.47811688s)
--- PASS: TestFunctional/serial/LogsCmd (1.48s)

                                                
                                    
x
+
TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 logs --file /tmp/TestFunctionalserialLogsFileCmd1508614769/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 logs --file /tmp/TestFunctionalserialLogsFileCmd1508614769/001/logs.txt: (1.510730726s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.51s)

                                                
                                    
x
+
TestFunctional/serial/InvalidService (4.33s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-312672 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-312672
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-312672: exit status 115 (322.490643ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.161:30972 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-312672 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.33s)
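The exit status 115 (SVC_UNREACHABLE) above is how minikube reports a service with no running backing pod. A sketch of the same check, assuming the testdata/invalidsvc.yaml manifest used by the test:

    kubectl --context functional-312672 apply -f testdata/invalidsvc.yaml
    out/minikube-linux-amd64 service invalid-svc -p functional-312672
    echo "exit code: $?"        # 115 when no running pod backs the service
    kubectl --context functional-312672 delete -f testdata/invalidsvc.yaml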

                                                
                                    
x
+
TestFunctional/parallel/ConfigCmd (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 config get cpus: exit status 14 (63.421895ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 config get cpus: exit status 14 (49.09244ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.36s)
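The exit status 14 seen twice above is returned whenever `config get` is asked for a key that is not currently set. A minimal sketch of the set/get/unset cycle the test exercises:

    out/minikube-linux-amd64 -p functional-312672 config get cpus      # exit 14: key not set
    out/minikube-linux-amd64 -p functional-312672 config set cpus 2
    out/minikube-linux-amd64 -p functional-312672 config get cpus      # prints 2
    out/minikube-linux-amd64 -p functional-312672 config unset cpus
    out/minikube-linux-amd64 -p functional-312672 config get cpus      # exit 14 again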

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (44.81s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-312672 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-312672 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 2229659: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (44.81s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-312672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-312672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (161.857779ms)

                                                
                                                
-- stdout --
	* [functional-312672] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:09:02.426504 2229396 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:09:02.426705 2229396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:02.426715 2229396 out.go:309] Setting ErrFile to fd 2...
	I0911 11:09:02.426720 2229396 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:02.426951 2229396 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:09:02.427868 2229396 out.go:303] Setting JSON to false
	I0911 11:09:02.429022 2229396 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":233493,"bootTime":1694197049,"procs":242,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:09:02.429116 2229396 start.go:138] virtualization: kvm guest
	I0911 11:09:02.431593 2229396 out.go:177] * [functional-312672] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:09:02.433252 2229396 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:09:02.433309 2229396 notify.go:220] Checking for updates...
	I0911 11:09:02.434960 2229396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:09:02.436657 2229396 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:09:02.438387 2229396 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:09:02.440110 2229396 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:09:02.442087 2229396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:09:02.444948 2229396 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:09:02.445404 2229396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:09:02.445475 2229396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:09:02.462620 2229396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43595
	I0911 11:09:02.463210 2229396 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:09:02.463868 2229396 main.go:141] libmachine: Using API Version  1
	I0911 11:09:02.463890 2229396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:09:02.464369 2229396 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:09:02.464629 2229396 main.go:141] libmachine: (functional-312672) Calling .DriverName
	I0911 11:09:02.464956 2229396 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:09:02.465277 2229396 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:09:02.465326 2229396 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:09:02.481695 2229396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36701
	I0911 11:09:02.482277 2229396 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:09:02.482824 2229396 main.go:141] libmachine: Using API Version  1
	I0911 11:09:02.482853 2229396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:09:02.483263 2229396 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:09:02.483494 2229396 main.go:141] libmachine: (functional-312672) Calling .DriverName
	I0911 11:09:02.522168 2229396 out.go:177] * Using the kvm2 driver based on existing profile
	I0911 11:09:02.524256 2229396 start.go:298] selected driver: kvm2
	I0911 11:09:02.524286 2229396 start.go:902] validating driver "kvm2" against &{Name:functional-312672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-312672 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.161 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:09:02.524478 2229396 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:09:02.527466 2229396 out.go:177] 
	W0911 11:09:02.529290 2229396 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0911 11:09:02.531073 2229396 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-312672 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.53s)
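The first run above exercises memory validation without touching the VM; 250MB is below the usable 1800MB minimum, so minikube exits 23 with RSRC_INSUFFICIENT_REQ_MEMORY, while the second dry run validates cleanly against the existing profile. A sketch of both:

    out/minikube-linux-amd64 start -p functional-312672 --dry-run --memory 250MB \
      --alsologtostderr --driver=kvm2 --container-runtime=crio      # exit 23: RSRC_INSUFFICIENT_REQ_MEMORY
    out/minikube-linux-amd64 start -p functional-312672 --dry-run \
      --alsologtostderr -v=1 --driver=kvm2 --container-runtime=crio # validates against the existing profile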

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-312672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-312672 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (169.128844ms)

                                                
                                                
-- stdout --
	* [functional-312672] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:09:02.951315 2229451 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:09:02.951462 2229451 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:02.951472 2229451 out.go:309] Setting ErrFile to fd 2...
	I0911 11:09:02.951477 2229451 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:09:02.951819 2229451 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:09:02.952457 2229451 out.go:303] Setting JSON to false
	I0911 11:09:02.953728 2229451 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":233494,"bootTime":1694197049,"procs":246,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:09:02.953815 2229451 start.go:138] virtualization: kvm guest
	I0911 11:09:02.956552 2229451 out.go:177] * [functional-312672] minikube v1.31.2 sur Ubuntu 20.04 (kvm/amd64)
	I0911 11:09:02.958584 2229451 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:09:02.958667 2229451 notify.go:220] Checking for updates...
	I0911 11:09:02.960471 2229451 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:09:02.962388 2229451 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:09:02.964618 2229451 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:09:02.966350 2229451 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:09:02.968502 2229451 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:09:02.970648 2229451 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:09:02.971102 2229451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:09:02.971188 2229451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:09:02.990798 2229451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42427
	I0911 11:09:02.991302 2229451 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:09:02.992019 2229451 main.go:141] libmachine: Using API Version  1
	I0911 11:09:02.992037 2229451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:09:02.992445 2229451 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:09:02.992689 2229451 main.go:141] libmachine: (functional-312672) Calling .DriverName
	I0911 11:09:02.993008 2229451 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:09:02.993467 2229451 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:09:02.993515 2229451 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:09:03.010812 2229451 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41735
	I0911 11:09:03.011735 2229451 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:09:03.012351 2229451 main.go:141] libmachine: Using API Version  1
	I0911 11:09:03.012376 2229451 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:09:03.012759 2229451 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:09:03.013033 2229451 main.go:141] libmachine: (functional-312672) Calling .DriverName
	I0911 11:09:03.057914 2229451 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0911 11:09:03.059579 2229451 start.go:298] selected driver: kvm2
	I0911 11:09:03.059607 2229451 start.go:902] validating driver "kvm2" against &{Name:functional-312672 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/17120/minikube-v1.31.0-1692872107-17120-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.40-1693938323-17174@sha256:4edc55cb1933a7155ece55408f8b4aebfd99e28fa2209bc82b369d8ca3bf525b Memory:4000 CPUs:2 DiskSize:20000 VMDriver: Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.28.1 ClusterName:functional-312672 Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8441 NodeName:} Nodes:[{Name: IP:192.168.39.161 Port:8441 KubernetesVersion:v1.28.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertE
xpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 AutoPauseInterval:1m0s}
	I0911 11:09:03.059800 2229451 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:09:03.062660 2229451 out.go:177] 
	W0911 11:09:03.064518 2229451 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0911 11:09:03.066424 2229451 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
x
+
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)
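The three invocations above cover the default, Go-template, and JSON output modes of `status`. A minimal sketch, with the template quoted for interactive shell use (field labels are copied verbatim from the command logged above):

    out/minikube-linux-amd64 -p functional-312672 status
    out/minikube-linux-amd64 -p functional-312672 status -f "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
    out/minikube-linux-amd64 -p functional-312672 status -o json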

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmdConnect (12.63s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1628: (dbg) Run:  kubectl --context functional-312672 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1634: (dbg) Run:  kubectl --context functional-312672 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-tnwn5" [db0963ce-5ae0-4c34-ae05-d3afc146ccd3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-tnwn5" [db0963ce-5ae0-4c34-ae05-d3afc146ccd3] Running
functional_test.go:1639: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 12.022788759s
functional_test.go:1648: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 service hello-node-connect --url
functional_test.go:1654: found endpoint for hello-node-connect: http://192.168.39.161:30632
functional_test.go:1674: http://192.168.39.161:30632: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-tnwn5

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.161:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.161:30632
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (12.63s)
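The echoserver response above comes from a NodePort service created on the fly. A minimal sketch of the deploy-expose-connect sequence; the `kubectl wait` and `curl` steps here stand in for the readiness polling and HTTP check the test performs itself:

    kubectl --context functional-312672 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-312672 expose deployment hello-node-connect --type=NodePort --port=8080
    kubectl --context functional-312672 wait --for=condition=ready pod -l app=hello-node-connect --timeout=120s
    URL=$(out/minikube-linux-amd64 -p functional-312672 service hello-node-connect --url)
    curl -s "$URL"    # echoserver prints the request details shown above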

                                                
                                    
x
+
TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1689: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 addons list
functional_test.go:1701: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

                                                
                                    
x
+
TestFunctional/parallel/PersistentVolumeClaim (55.76s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6f387760-90cf-4814-98b4-53985f709fdc] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.022182993s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-312672 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-312672 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-312672 get pvc myclaim -o=json
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-312672 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-312672 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [82b30c5d-4405-45ea-9ef3-70a8e4ce39df] Pending
helpers_test.go:344: "sp-pod" [82b30c5d-4405-45ea-9ef3-70a8e4ce39df] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [82b30c5d-4405-45ea-9ef3-70a8e4ce39df] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.014688263s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-312672 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-312672 delete -f testdata/storage-provisioner/pod.yaml
E0911 11:09:15.053765 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:15.060010 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:15.070383 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:15.090692 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:15.131219 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:15.211669 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:15.372210 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:15.693060 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-312672 delete -f testdata/storage-provisioner/pod.yaml: (3.65607662s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-312672 apply -f testdata/storage-provisioner/pod.yaml
E0911 11:09:16.333879 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [6aa370e5-3064-4138-9df9-16c70771051e] Pending
helpers_test.go:344: "sp-pod" [6aa370e5-3064-4138-9df9-16c70771051e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
E0911 11:09:17.614513 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:09:20.175304 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
helpers_test.go:344: "sp-pod" [6aa370e5-3064-4138-9df9-16c70771051e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.015316238s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-312672 exec sp-pod -- ls /tmp/mount
2023/09/11 11:09:47 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (55.76s)
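The test above checks that data written through the PVC survives deletion and re-creation of the consuming pod. A sketch of the same sequence, assuming the testdata/storage-provisioner manifests and the sp-pod name used by the test:

    kubectl --context functional-312672 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-312672 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-312672 wait --for=condition=ready pod -l test=storage-provisioner --timeout=180s
    kubectl --context functional-312672 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-312672 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-312672 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-312672 wait --for=condition=ready pod -l test=storage-provisioner --timeout=180s
    kubectl --context functional-312672 exec sp-pod -- ls /tmp/mount    # foo is still there: the claim persisted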

                                                
                                    
x
+
TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1724: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "echo hello"
functional_test.go:1741: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.49s)

                                                
                                    
x
+
TestFunctional/parallel/CpCmd (1.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh -n functional-312672 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 cp functional-312672:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd606253258/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh -n functional-312672 "sudo cat /home/docker/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.06s)

                                                
                                    
x
+
TestFunctional/parallel/MySQL (34.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-312672 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-2vplt" [cbf81798-2809-4e6a-bbc6-8bfef5da16e2] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-2vplt" [cbf81798-2809-4e6a-bbc6-8bfef5da16e2] Running
E0911 11:09:25.296221 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 30.033338724s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-312672 exec mysql-859648c796-2vplt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-312672 exec mysql-859648c796-2vplt -- mysql -ppassword -e "show databases;": exit status 1 (394.471329ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-312672 exec mysql-859648c796-2vplt -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-312672 exec mysql-859648c796-2vplt -- mysql -ppassword -e "show databases;": exit status 1 (322.832027ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-312672 exec mysql-859648c796-2vplt -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (34.16s)
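The two transient errors above (access denied, then the socket not yet available) occur while mysqld is still initializing inside the pod; the test simply re-runs the query until it succeeds. A minimal equivalent retry loop, using the pod name from this run:

    # retry until mysqld inside the pod accepts the query (pod name taken from this run)
    until kubectl --context functional-312672 exec mysql-859648c796-2vplt -- \
          mysql -ppassword -e "show databases;"; do
      sleep 5
    done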

                                                
                                    
x
+
TestFunctional/parallel/FileSync (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/2222471/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo cat /etc/test/nested/copy/2222471/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

                                                
                                    
x
+
TestFunctional/parallel/CertSync (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/2222471.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo cat /etc/ssl/certs/2222471.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/2222471.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo cat /usr/share/ca-certificates/2222471.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/22224712.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo cat /etc/ssl/certs/22224712.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/22224712.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo cat /usr/share/ca-certificates/22224712.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.58s)

                                                
                                    
x
+
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-312672 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
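The go-template above flattens every label on the first node into a single space-separated line. A sketch of the same query, with the template single-quoted for interactive shell use:

    kubectl --context functional-312672 get nodes --output=go-template \
      --template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'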

                                                
                                    
x
+
TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh "sudo systemctl is-active docker": exit status 1 (237.994236ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh "sudo systemctl is-active containerd": exit status 1 (248.978774ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.49s)
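The non-zero exits above are expected: `systemctl is-active` prints "inactive" and exits 3 for a stopped unit, and minikube's ssh wrapper surfaces that as a failure. A minimal sketch of verifying that the non-selected runtimes stay disabled when crio is the active runtime:

    out/minikube-linux-amd64 -p functional-312672 ssh "sudo systemctl is-active docker"      # prints "inactive", unit exit 3
    out/minikube-linux-amd64 -p functional-312672 ssh "sudo systemctl is-active containerd"  # prints "inactive", unit exit 3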

                                                
                                    
x
+
TestFunctional/parallel/License (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.18s)

                                                
                                    
x
+
TestFunctional/parallel/Version/short (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 version --short
--- PASS: TestFunctional/parallel/Version/short (0.05s)

                                                
                                    
x
+
TestFunctional/parallel/Version/components (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 version -o=json --components
E0911 11:09:35.536443 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/Version/components (0.95s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_changes (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

                                                
                                    
x
+
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.10s)

                                                
                                    
x
+
TestFunctional/parallel/ServiceCmd/DeployApp (12.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1438: (dbg) Run:  kubectl --context functional-312672 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1444: (dbg) Run:  kubectl --context functional-312672 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-rdq68" [171c3791-02ad-4db0-b71b-4f5cac94b006] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-rdq68" [171c3791-02ad-4db0-b71b-4f5cac94b006] Running
functional_test.go:1449: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 12.024653853s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (12.29s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1269: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1274: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1309: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1314: Took "300.332171ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1323: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1328: Took "52.805761ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.35s)

                                                
                                    
x
+
TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1360: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1365: Took "360.86115ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1373: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1378: Took "52.959201ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.41s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/any-port (9.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdany-port4157376547/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1694430530806810036" to /tmp/TestFunctionalparallelMountCmdany-port4157376547/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1694430530806810036" to /tmp/TestFunctionalparallelMountCmdany-port4157376547/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1694430530806810036" to /tmp/TestFunctionalparallelMountCmdany-port4157376547/001/test-1694430530806810036
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (299.053285ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 11 11:08 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 11 11:08 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 11 11:08 test-1694430530806810036
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh cat /mount-9p/test-1694430530806810036
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-312672 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [3e20eb0f-84c0-4501-8a76-efcb00a17bda] Pending
helpers_test.go:344: "busybox-mount" [3e20eb0f-84c0-4501-8a76-efcb00a17bda] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [3e20eb0f-84c0-4501-8a76-efcb00a17bda] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [3e20eb0f-84c0-4501-8a76-efcb00a17bda] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.028980355s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-312672 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdany-port4157376547/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.08s)
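The retry visible above (the first findmnt probe exits with status 1 while the mount helper is still starting) can be reproduced with a small Go sketch; it assumes minikube is on PATH and reuses the profile name and mount point from this run.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForMount polls `minikube ssh "findmnt -T <dir> | grep 9p"` until the 9p
// mount becomes visible or the deadline passes.
func waitForMount(profile, dir string, deadline time.Duration) error {
	stop := time.Now().Add(deadline)
	for {
		cmd := exec.Command("minikube", "-p", profile, "ssh",
			fmt.Sprintf("findmnt -T %s | grep 9p", dir))
		if out, err := cmd.CombinedOutput(); err == nil {
			fmt.Print(string(out))
			return nil
		}
		if time.Now().After(stop) {
			return fmt.Errorf("mount %s not visible after %v", dir, deadline)
		}
		time.Sleep(time.Second)
	}
}

func main() {
	// Profile and mount point match the ones used in this run; adjust as needed.
	if err := waitForMount("functional-312672", "/mount-9p", 30*time.Second); err != nil {
		fmt.Println(err)
	}
}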

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1458: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.47s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdspecific-port1465962791/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (303.460603ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdspecific-port1465962791/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh "sudo umount -f /mount-9p": exit status 1 (276.32191ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-312672 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdspecific-port1465962791/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.01s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1488: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 service list -o json
functional_test.go:1493: Took "542.807302ms" to run "out/minikube-linux-amd64 -p functional-312672 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.54s)
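A minimal sketch of consuming `service list -o json` programmatically, assuming minikube is on PATH; only the command itself is taken from the log, so the output is decoded generically instead of assuming a schema.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

func main() {
	// List services for the profile used in this run and decode the JSON
	// without assuming its exact shape.
	out, err := exec.Command("minikube", "-p", "functional-312672",
		"service", "list", "-o", "json").Output()
	if err != nil {
		fmt.Println("service list failed:", err)
		return
	}
	var v interface{}
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	if entries, ok := v.([]interface{}); ok {
		fmt.Printf("%d services reported\n", len(entries))
	} else {
		fmt.Printf("decoded a %T value\n", v)
	}
}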

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1508: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 service --namespace=default --https --url hello-node
functional_test.go:1521: found endpoint: https://192.168.39.161:31191
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.69s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1539: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1558: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 service hello-node --url
functional_test.go:1564: found endpoint for hello-node: http://192.168.39.161:31191
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.57s)
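The endpoint discovery above can be scripted the same way; a sketch (assuming minikube on PATH and a hello-node service as in this run) that asks for the URL and issues one GET against it. If several URLs are printed, only the first is probed.

package main

import (
	"fmt"
	"net/http"
	"os/exec"
	"strings"
)

func main() {
	// Resolve the NodePort URL for hello-node, then probe it once.
	out, err := exec.Command("minikube", "-p", "functional-312672",
		"service", "hello-node", "--url").Output()
	if err != nil {
		fmt.Println("minikube service failed:", err)
		return
	}
	url := strings.TrimSpace(string(out))
	if i := strings.IndexByte(url, '\n'); i >= 0 {
		url = url[:i] // keep the first URL if several are printed
	}
	fmt.Println("found endpoint:", url)

	resp, err := http.Get(url)
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("status:", resp.Status)
}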

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1182840966/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1182840966/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1182840966/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T" /mount1: exit status 1 (376.936992ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-312672 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1182840966/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1182840966/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-312672 /tmp/TestFunctionalparallelMountCmdVerifyCleanup1182840966/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)
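The cleanup call above, `minikube mount ... --kill=true`, can be invoked on its own to tear down any lingering mount helpers for a profile; a short sketch assuming minikube is on PATH.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Kill any mount helper processes still attached to the profile,
	// mirroring the cleanup the test issues before stopping its daemons.
	out, err := exec.Command("minikube", "mount", "-p", "functional-312672",
		"--kill=true").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("mount --kill failed:", err)
	}
}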

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-312672 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.1
registry.k8s.io/kube-proxy:v1.28.1
registry.k8s.io/kube-controller-manager:v1.28.1
registry.k8s.io/kube-apiserver:v1.28.1
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-312672
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-312672
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230511-dc714da8
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-312672 image ls --format short --alsologtostderr:
I0911 11:09:37.958790 2230229 out.go:296] Setting OutFile to fd 1 ...
I0911 11:09:37.958925 2230229 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:37.958937 2230229 out.go:309] Setting ErrFile to fd 2...
I0911 11:09:37.958944 2230229 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:37.959154 2230229 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
I0911 11:09:37.959727 2230229 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:37.959827 2230229 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:37.960175 2230229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:37.960235 2230229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:37.974768 2230229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43159
I0911 11:09:37.975263 2230229 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:37.975925 2230229 main.go:141] libmachine: Using API Version  1
I0911 11:09:37.975961 2230229 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:37.976301 2230229 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:37.976530 2230229 main.go:141] libmachine: (functional-312672) Calling .GetState
I0911 11:09:37.978947 2230229 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:37.978998 2230229 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:37.995285 2230229 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37849
I0911 11:09:37.995778 2230229 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:37.996434 2230229 main.go:141] libmachine: Using API Version  1
I0911 11:09:37.996450 2230229 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:37.996853 2230229 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:37.997086 2230229 main.go:141] libmachine: (functional-312672) Calling .DriverName
I0911 11:09:37.997311 2230229 ssh_runner.go:195] Run: systemctl --version
I0911 11:09:37.997336 2230229 main.go:141] libmachine: (functional-312672) Calling .GetSSHHostname
I0911 11:09:38.000784 2230229 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.001192 2230229 main.go:141] libmachine: (functional-312672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:10:7a", ip: ""} in network mk-functional-312672: {Iface:virbr1 ExpiryTime:2023-09-11 12:06:14 +0000 UTC Type:0 Mac:52:54:00:32:10:7a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-312672 Clientid:01:52:54:00:32:10:7a}
I0911 11:09:38.001230 2230229 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined IP address 192.168.39.161 and MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.001410 2230229 main.go:141] libmachine: (functional-312672) Calling .GetSSHPort
I0911 11:09:38.001674 2230229 main.go:141] libmachine: (functional-312672) Calling .GetSSHKeyPath
I0911 11:09:38.001924 2230229 main.go:141] libmachine: (functional-312672) Calling .GetSSHUsername
I0911 11:09:38.002075 2230229 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/functional-312672/id_rsa Username:docker}
I0911 11:09:38.088765 2230229 ssh_runner.go:195] Run: sudo crictl images --output json
I0911 11:09:38.142922 2230229 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.142943 2230229 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.143286 2230229 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.143342 2230229 main.go:141] libmachine: Making call to close connection to plugin binary
I0911 11:09:38.143367 2230229 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.143379 2230229 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.143707 2230229 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.143731 2230229 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.25s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-312672 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/kube-apiserver          | v1.28.1            | 5c801295c21d0 | 127MB  |
| registry.k8s.io/kube-scheduler          | v1.28.1            | b462ce0c8b1ff | 61.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20230511-dc714da8 | b0b1fa0f58c6e | 65.2MB |
| gcr.io/google-containers/addon-resizer  | functional-312672  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| docker.io/library/nginx                 | latest             | f5a6b296b8a29 | 191MB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/minikube-local-cache-test     | functional-312672  | f642a73b0edff | 3.35kB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-controller-manager | v1.28.1            | 821b3dfea27be | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.1            | 6cdbabde3874e | 74.7MB |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| docker.io/library/mysql                 | 5.7                | 92034fe9a41f4 | 601MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-312672 image ls --format table --alsologtostderr:
I0911 11:09:38.200117 2230280 out.go:296] Setting OutFile to fd 1 ...
I0911 11:09:38.200249 2230280 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:38.200259 2230280 out.go:309] Setting ErrFile to fd 2...
I0911 11:09:38.200263 2230280 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:38.200481 2230280 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
I0911 11:09:38.201238 2230280 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:38.201347 2230280 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:38.201683 2230280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:38.201738 2230280 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:38.218426 2230280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41737
I0911 11:09:38.218909 2230280 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:38.219652 2230280 main.go:141] libmachine: Using API Version  1
I0911 11:09:38.219714 2230280 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:38.220100 2230280 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:38.220330 2230280 main.go:141] libmachine: (functional-312672) Calling .GetState
I0911 11:09:38.222295 2230280 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:38.222357 2230280 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:38.239361 2230280 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39211
I0911 11:09:38.239875 2230280 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:38.240472 2230280 main.go:141] libmachine: Using API Version  1
I0911 11:09:38.240503 2230280 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:38.240894 2230280 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:38.241102 2230280 main.go:141] libmachine: (functional-312672) Calling .DriverName
I0911 11:09:38.241297 2230280 ssh_runner.go:195] Run: systemctl --version
I0911 11:09:38.241336 2230280 main.go:141] libmachine: (functional-312672) Calling .GetSSHHostname
I0911 11:09:38.244671 2230280 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.245114 2230280 main.go:141] libmachine: (functional-312672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:10:7a", ip: ""} in network mk-functional-312672: {Iface:virbr1 ExpiryTime:2023-09-11 12:06:14 +0000 UTC Type:0 Mac:52:54:00:32:10:7a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-312672 Clientid:01:52:54:00:32:10:7a}
I0911 11:09:38.245145 2230280 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined IP address 192.168.39.161 and MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.245322 2230280 main.go:141] libmachine: (functional-312672) Calling .GetSSHPort
I0911 11:09:38.245584 2230280 main.go:141] libmachine: (functional-312672) Calling .GetSSHKeyPath
I0911 11:09:38.245789 2230280 main.go:141] libmachine: (functional-312672) Calling .GetSSHUsername
I0911 11:09:38.245997 2230280 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/functional-312672/id_rsa Username:docker}
I0911 11:09:38.332836 2230280 ssh_runner.go:195] Run: sudo crictl images --output json
I0911 11:09:38.390220 2230280 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.390243 2230280 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.390583 2230280 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.390611 2230280 main.go:141] libmachine: Making call to close connection to plugin binary
I0911 11:09:38.390622 2230280 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.390630 2230280 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.390881 2230280 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.390908 2230280 main.go:141] libmachine: (functional-312672) DBG | Closing plugin on server side
I0911 11:09:38.390914 2230280 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-312672 image ls --format json --alsologtostderr:
[{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77","repoDigests":["registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774","registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2"],"repoTags":["registry.k8s.io/kube-apiserver:v1.28.1"],"size":"126972880"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf
4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-312672"],"size":"34114467"},{"id":"f642a73b0edff58df6dcb86503b4ec910b5e964bb559f4d7a1ea76063795781c","repoDigests":["localhost/minikube-local-cache-test@sha256:8ca1fa759cf91e8782741d59cdc1e9cfd145a3b8ba69f54c1a9e18999ad0335d"],"repoTags":["localhost/minikube-local-cache-test:functional-312672"],"size":"3345"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e0
13d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags"
:[],"size":"249229937"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4","registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.1"],"size":"61477686"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"f5a6b296b8a29b4e3d89ffa99e4a86
309874ae400e82b3d3993f84e1e3bb0eb9","repoDigests":["docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153","docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a"],"repoTags":["docker.io/library/nginx:latest"],"size":"190820093"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-mi
nikube/storage-provisioner:v5"],"size":"31470524"},{"id":"821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830","registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.1"],"size":"123163446"},{"id":"6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5","repoDigests":["registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3","registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.1"],"size":"74680215"},{"id":"b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da","repoDigests":["docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974","docker.io/kindest/kindn
etd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9"],"repoTags":["docker.io/kindest/kindnetd:v20230511-dc714da8"],"size":"65249302"},{"id":"92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d","repoDigests":["docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83","docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83"],"repoTags":["docker.io/library/mysql:5.7"],"size":"601277093"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-312672 image ls --format json --alsologtostderr:
I0911 11:09:38.189273 2230274 out.go:296] Setting OutFile to fd 1 ...
I0911 11:09:38.189430 2230274 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:38.189440 2230274 out.go:309] Setting ErrFile to fd 2...
I0911 11:09:38.189445 2230274 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:38.189654 2230274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
I0911 11:09:38.190286 2230274 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:38.190388 2230274 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:38.190735 2230274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:38.190803 2230274 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:38.206359 2230274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38207
I0911 11:09:38.206904 2230274 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:38.207615 2230274 main.go:141] libmachine: Using API Version  1
I0911 11:09:38.207671 2230274 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:38.208124 2230274 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:38.208332 2230274 main.go:141] libmachine: (functional-312672) Calling .GetState
I0911 11:09:38.210268 2230274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:38.210330 2230274 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:38.229036 2230274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45321
I0911 11:09:38.229517 2230274 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:38.230049 2230274 main.go:141] libmachine: Using API Version  1
I0911 11:09:38.230068 2230274 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:38.230467 2230274 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:38.230731 2230274 main.go:141] libmachine: (functional-312672) Calling .DriverName
I0911 11:09:38.230986 2230274 ssh_runner.go:195] Run: systemctl --version
I0911 11:09:38.231021 2230274 main.go:141] libmachine: (functional-312672) Calling .GetSSHHostname
I0911 11:09:38.234317 2230274 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.234762 2230274 main.go:141] libmachine: (functional-312672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:10:7a", ip: ""} in network mk-functional-312672: {Iface:virbr1 ExpiryTime:2023-09-11 12:06:14 +0000 UTC Type:0 Mac:52:54:00:32:10:7a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-312672 Clientid:01:52:54:00:32:10:7a}
I0911 11:09:38.234797 2230274 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined IP address 192.168.39.161 and MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.234973 2230274 main.go:141] libmachine: (functional-312672) Calling .GetSSHPort
I0911 11:09:38.235224 2230274 main.go:141] libmachine: (functional-312672) Calling .GetSSHKeyPath
I0911 11:09:38.235549 2230274 main.go:141] libmachine: (functional-312672) Calling .GetSSHUsername
I0911 11:09:38.235722 2230274 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/functional-312672/id_rsa Username:docker}
I0911 11:09:38.323499 2230274 ssh_runner.go:195] Run: sudo crictl images --output json
I0911 11:09:38.368724 2230274 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.368742 2230274 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.369109 2230274 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.369164 2230274 main.go:141] libmachine: Making call to close connection to plugin binary
I0911 11:09:38.369173 2230274 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.369185 2230274 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.369190 2230274 main.go:141] libmachine: (functional-312672) DBG | Closing plugin on server side
I0911 11:09:38.369423 2230274 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.369444 2230274 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.24s)
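The JSON above is straightforward to post-process; a sketch that decodes it using only the keys visible in this output (id, repoDigests, repoTags, size), assuming minikube is on PATH and the same profile name.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image lists only the fields that appear in the `image ls --format json`
// output shown above; no other keys are assumed.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("minikube", "-p", "functional-312672",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	for _, img := range images {
		tag := "<none>"
		if len(img.RepoTags) > 0 {
			tag = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", tag, img.Size)
	}
}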

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-312672 image ls --format yaml --alsologtostderr:
- id: 821b3dfea27be94a3834878bec6f36d332c83250be3e3c2a2e2233575ebc9bac
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:6942efe22c2422615c52705de2ad58cf5639bdb1610fbbfb6606dbad74690830
- registry.k8s.io/kube-controller-manager@sha256:dda6dba8a55203ed1595efcda865a526b9282c2d9b959e9ed0a88f54a7a91195
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.1
size: "123163446"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: f5a6b296b8a29b4e3d89ffa99e4a86309874ae400e82b3d3993f84e1e3bb0eb9
repoDigests:
- docker.io/library/nginx@sha256:6926dd802f40e5e7257fded83e0d8030039642e4e10c4a98a6478e9c6fe06153
- docker.io/library/nginx@sha256:9504f3f64a3f16f0eaf9adca3542ff8b2a6880e6abfb13e478cca23f6380080a
repoTags:
- docker.io/library/nginx:latest
size: "190820093"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-312672
size: "34114467"
- id: 5c801295c21d0de2947ad600b9388f090f0f7ff22add9d9d95be82fa12288f77
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:76f9aca3daf38cf3aacd91d83954932b04e0f727a37c55a10dc6165355c43774
- registry.k8s.io/kube-apiserver@sha256:f517207d13adeb50c63f9bdac2824e0e7512817eca47ac0540685771243742b2
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.1
size: "126972880"
- id: b462ce0c8b1ff16d466c6e8c9fcae54ec740fdeb73af6e637b77eea36246054a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:3c9249a1f7623007a8db3522eac203f94cbb3910501879b792d95ea8470cc3d4
- registry.k8s.io/kube-scheduler@sha256:7e621071b5174e9c6c0e0268ddbbc9139d6cba29052bbb1131890bf91d06bf1e
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.1
size: "61477686"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: b0b1fa0f58c6e932b7f20bf208b2841317a1e8c88cc51b18358310bbd8ec95da
repoDigests:
- docker.io/kindest/kindnetd@sha256:6c00e28db008c2afa67d9ee085c86184ec9ae5281d5ae1bd15006746fb9a1974
- docker.io/kindest/kindnetd@sha256:7c15172bd152f05b102cea9c8f82ef5abeb56797ec85630923fb98d20fd519e9
repoTags:
- docker.io/kindest/kindnetd:v20230511-dc714da8
size: "65249302"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 6cdbabde3874e1eca92441870b0ddeaef0edb514c3b3e2a3d5ade845b500bba5
repoDigests:
- registry.k8s.io/kube-proxy@sha256:00cfd892b4087d5194c933629a585d174dd894dddfb98d0f8a325aa17a2b27e3
- registry.k8s.io/kube-proxy@sha256:30096ad233e7bfe72662180c5ac4497f732346d6d25b7c1f1c0c7cb1a1e7e41c
repoTags:
- registry.k8s.io/kube-proxy:v1.28.1
size: "74680215"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: f642a73b0edff58df6dcb86503b4ec910b5e964bb559f4d7a1ea76063795781c
repoDigests:
- localhost/minikube-local-cache-test@sha256:8ca1fa759cf91e8782741d59cdc1e9cfd145a3b8ba69f54c1a9e18999ad0335d
repoTags:
- localhost/minikube-local-cache-test:functional-312672
size: "3345"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 92034fe9a41f4344b97f3fc88a8796248e2cfa9b934be58379f3dbc150d07d9d
repoDigests:
- docker.io/library/mysql@sha256:2c23f254c6b9444ecda9ba36051a9800e8934a2f5828ecc8730531db8142af83
- docker.io/library/mysql@sha256:aaa1374f1e6c24d73e9dfa8f2cdae81c8054e6d1d80c32da883a9050258b6e83
repoTags:
- docker.io/library/mysql:5.7
size: "601277093"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-312672 image ls --format yaml --alsologtostderr:
I0911 11:09:37.956352 2230228 out.go:296] Setting OutFile to fd 1 ...
I0911 11:09:37.956507 2230228 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:37.956519 2230228 out.go:309] Setting ErrFile to fd 2...
I0911 11:09:37.956526 2230228 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:37.956772 2230228 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
I0911 11:09:37.957425 2230228 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:37.957548 2230228 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:37.957985 2230228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:37.958045 2230228 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:37.974791 2230228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37773
I0911 11:09:37.975397 2230228 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:37.976118 2230228 main.go:141] libmachine: Using API Version  1
I0911 11:09:37.976148 2230228 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:37.976576 2230228 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:37.976845 2230228 main.go:141] libmachine: (functional-312672) Calling .GetState
I0911 11:09:37.979105 2230228 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:37.979168 2230228 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:37.995304 2230228 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41837
I0911 11:09:37.995778 2230228 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:37.996351 2230228 main.go:141] libmachine: Using API Version  1
I0911 11:09:37.996381 2230228 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:37.996780 2230228 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:37.997027 2230228 main.go:141] libmachine: (functional-312672) Calling .DriverName
I0911 11:09:37.997263 2230228 ssh_runner.go:195] Run: systemctl --version
I0911 11:09:37.997293 2230228 main.go:141] libmachine: (functional-312672) Calling .GetSSHHostname
I0911 11:09:38.000333 2230228 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.000776 2230228 main.go:141] libmachine: (functional-312672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:10:7a", ip: ""} in network mk-functional-312672: {Iface:virbr1 ExpiryTime:2023-09-11 12:06:14 +0000 UTC Type:0 Mac:52:54:00:32:10:7a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-312672 Clientid:01:52:54:00:32:10:7a}
I0911 11:09:38.000828 2230228 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined IP address 192.168.39.161 and MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.000919 2230228 main.go:141] libmachine: (functional-312672) Calling .GetSSHPort
I0911 11:09:38.001116 2230228 main.go:141] libmachine: (functional-312672) Calling .GetSSHKeyPath
I0911 11:09:38.001279 2230228 main.go:141] libmachine: (functional-312672) Calling .GetSSHUsername
I0911 11:09:38.001440 2230228 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/functional-312672/id_rsa Username:docker}
I0911 11:09:38.092571 2230228 ssh_runner.go:195] Run: sudo crictl images --output json
I0911 11:09:38.132739 2230228 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.132760 2230228 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.133113 2230228 main.go:141] libmachine: (functional-312672) DBG | Closing plugin on server side
I0911 11:09:38.133175 2230228 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.133199 2230228 main.go:141] libmachine: Making call to close connection to plugin binary
I0911 11:09:38.133211 2230228 main.go:141] libmachine: Making call to close driver server
I0911 11:09:38.133222 2230228 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:38.133502 2230228 main.go:141] libmachine: (functional-312672) DBG | Closing plugin on server side
I0911 11:09:38.133507 2230228 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:38.133525 2230228 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-312672 ssh pgrep buildkitd: exit status 1 (199.569847ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image build -t localhost/my-image:functional-312672 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 image build -t localhost/my-image:functional-312672 testdata/build --alsologtostderr: (2.244947715s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-312672 image build -t localhost/my-image:functional-312672 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> b6f2040d03f
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-312672
--> 8816232de1c
Successfully tagged localhost/my-image:functional-312672
8816232de1c142c8c2cddfba2cf233f00044487fc53f7ce5e35bdbca720501b5
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-312672 image build -t localhost/my-image:functional-312672 testdata/build --alsologtostderr:
I0911 11:09:38.620341 2230351 out.go:296] Setting OutFile to fd 1 ...
I0911 11:09:38.620499 2230351 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:38.620508 2230351 out.go:309] Setting ErrFile to fd 2...
I0911 11:09:38.620512 2230351 out.go:343] TERM=,COLORTERM=, which probably does not support color
I0911 11:09:38.620715 2230351 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
I0911 11:09:38.621354 2230351 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:38.621932 2230351 config.go:182] Loaded profile config "functional-312672": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
I0911 11:09:38.622306 2230351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:38.622347 2230351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:38.637974 2230351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42453
I0911 11:09:38.638537 2230351 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:38.639150 2230351 main.go:141] libmachine: Using API Version  1
I0911 11:09:38.639190 2230351 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:38.639577 2230351 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:38.639795 2230351 main.go:141] libmachine: (functional-312672) Calling .GetState
I0911 11:09:38.641786 2230351 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0911 11:09:38.641831 2230351 main.go:141] libmachine: Launching plugin server for driver kvm2
I0911 11:09:38.657429 2230351 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38331
I0911 11:09:38.657935 2230351 main.go:141] libmachine: () Calling .GetVersion
I0911 11:09:38.658476 2230351 main.go:141] libmachine: Using API Version  1
I0911 11:09:38.658493 2230351 main.go:141] libmachine: () Calling .SetConfigRaw
I0911 11:09:38.658888 2230351 main.go:141] libmachine: () Calling .GetMachineName
I0911 11:09:38.659136 2230351 main.go:141] libmachine: (functional-312672) Calling .DriverName
I0911 11:09:38.659388 2230351 ssh_runner.go:195] Run: systemctl --version
I0911 11:09:38.659418 2230351 main.go:141] libmachine: (functional-312672) Calling .GetSSHHostname
I0911 11:09:38.662400 2230351 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.662779 2230351 main.go:141] libmachine: (functional-312672) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:32:10:7a", ip: ""} in network mk-functional-312672: {Iface:virbr1 ExpiryTime:2023-09-11 12:06:14 +0000 UTC Type:0 Mac:52:54:00:32:10:7a Iaid: IPaddr:192.168.39.161 Prefix:24 Hostname:functional-312672 Clientid:01:52:54:00:32:10:7a}
I0911 11:09:38.662816 2230351 main.go:141] libmachine: (functional-312672) DBG | domain functional-312672 has defined IP address 192.168.39.161 and MAC address 52:54:00:32:10:7a in network mk-functional-312672
I0911 11:09:38.662926 2230351 main.go:141] libmachine: (functional-312672) Calling .GetSSHPort
I0911 11:09:38.663134 2230351 main.go:141] libmachine: (functional-312672) Calling .GetSSHKeyPath
I0911 11:09:38.663343 2230351 main.go:141] libmachine: (functional-312672) Calling .GetSSHUsername
I0911 11:09:38.663468 2230351 sshutil.go:53] new ssh client: &{IP:192.168.39.161 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/functional-312672/id_rsa Username:docker}
I0911 11:09:38.744444 2230351 build_images.go:151] Building image from path: /tmp/build.193558421.tar
I0911 11:09:38.744538 2230351 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0911 11:09:38.757630 2230351 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.193558421.tar
I0911 11:09:38.763601 2230351 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.193558421.tar: stat -c "%s %y" /var/lib/minikube/build/build.193558421.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.193558421.tar': No such file or directory
I0911 11:09:38.763665 2230351 ssh_runner.go:362] scp /tmp/build.193558421.tar --> /var/lib/minikube/build/build.193558421.tar (3072 bytes)
I0911 11:09:38.796834 2230351 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.193558421
I0911 11:09:38.807239 2230351 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.193558421 -xf /var/lib/minikube/build/build.193558421.tar
I0911 11:09:38.817758 2230351 crio.go:297] Building image: /var/lib/minikube/build/build.193558421
I0911 11:09:38.817851 2230351 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-312672 /var/lib/minikube/build/build.193558421 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0911 11:09:40.791339 2230351 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-312672 /var/lib/minikube/build/build.193558421 --cgroup-manager=cgroupfs: (1.973459762s)
I0911 11:09:40.791435 2230351 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.193558421
I0911 11:09:40.803120 2230351 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.193558421.tar
I0911 11:09:40.814444 2230351 build_images.go:207] Built localhost/my-image:functional-312672 from /tmp/build.193558421.tar
I0911 11:09:40.814493 2230351 build_images.go:123] succeeded building to: functional-312672
I0911 11:09:40.814499 2230351 build_images.go:124] failed building to: 
I0911 11:09:40.814582 2230351 main.go:141] libmachine: Making call to close driver server
I0911 11:09:40.814607 2230351 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:40.814967 2230351 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:40.815016 2230351 main.go:141] libmachine: Making call to close connection to plugin binary
I0911 11:09:40.815040 2230351 main.go:141] libmachine: Making call to close driver server
I0911 11:09:40.815050 2230351 main.go:141] libmachine: (functional-312672) Calling .Close
I0911 11:09:40.815049 2230351 main.go:141] libmachine: (functional-312672) DBG | Closing plugin on server side
I0911 11:09:40.815346 2230351 main.go:141] libmachine: (functional-312672) DBG | Closing plugin on server side
I0911 11:09:40.815384 2230351 main.go:141] libmachine: Successfully made call to close driver server
I0911 11:09:40.815406 2230351 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.66s)

TestFunctional/parallel/ImageCommands/Setup (1.18s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.15810154s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-312672
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.18s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image load --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 image load --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr: (5.620475569s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (5.95s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.02s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image load --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 image load --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr: (4.737691135s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (5.02s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.72s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-312672
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image load --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 image load --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr: (10.51174429s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (11.72s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.17s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image save gcr.io/google-containers/addon-resizer:functional-312672 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 image save gcr.io/google-containers/addon-resizer:functional-312672 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (2.170438885s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (2.17s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image rm gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.83s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.82s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (5.500151744s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (5.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.4s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-312672
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-312672 image save --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-312672 image save --daemon gcr.io/google-containers/addon-resizer:functional-312672 --alsologtostderr: (1.359899185s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-312672
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.40s)

TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-312672
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-312672
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-312672
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestIngressAddonLegacy/StartLegacyK8sCluster (78.38s)

=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
ingress_addon_legacy_test.go:39: (dbg) Run:  out/minikube-linux-amd64 start -p ingress-addon-legacy-508741 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
E0911 11:09:56.017100 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:10:36.978528 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
ingress_addon_legacy_test.go:39: (dbg) Done: out/minikube-linux-amd64 start -p ingress-addon-legacy-508741 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m18.383578194s)
--- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (78.38s)

TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.56s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
ingress_addon_legacy_test.go:70: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons enable ingress --alsologtostderr -v=5
ingress_addon_legacy_test.go:70: (dbg) Done: out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons enable ingress --alsologtostderr -v=5: (14.560976953s)
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (14.56s)

TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
ingress_addon_legacy_test.go:79: (dbg) Run:  out/minikube-linux-amd64 -p ingress-addon-legacy-508741 addons enable ingress-dns --alsologtostderr -v=5
--- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.61s)

TestJSONOutput/start/Command (64.39s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-152663 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0911 11:14:15.053445 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:14:28.534073 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:14:42.743651 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:15:09.494253 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-152663 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m4.389154058s)
--- PASS: TestJSONOutput/start/Command (64.39s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-152663 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.63s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-152663 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.63s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.11s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-152663 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-152663 --output=json --user=testUser: (7.109023396s)
--- PASS: TestJSONOutput/stop/Command (7.11s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.21s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-341914 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-341914 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (67.896593ms)

-- stdout --
	{"specversion":"1.0","id":"69786f9d-2cdb-44ef-b853-a342e5edcd2d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-341914] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"0076abf8-219c-44cc-8ea3-8135de502077","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=17223"}}
	{"specversion":"1.0","id":"72395128-471a-4ba8-a4ed-09e9c48fdf71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dbd1bc5d-944e-4ce5-bf4b-350fbc0de65d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig"}}
	{"specversion":"1.0","id":"4319fc23-4629-4eba-9ad3-a44517d3db30","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube"}}
	{"specversion":"1.0","id":"1d4ec495-8210-4846-8186-4bc9e13ef2e3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"75a070fc-b3a0-47b2-a531-6d006e32496a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"e3d15f96-e011-4ab3-97b9-bb11c5a24b4c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-341914" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-341914
--- PASS: TestErrorJSONOutput (0.21s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (100.9s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-838086 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-838086 --driver=kvm2  --container-runtime=crio: (46.852013169s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-842240 --driver=kvm2  --container-runtime=crio
E0911 11:16:22.845123 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:22.850538 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:22.860947 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:22.881359 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:22.921745 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:23.002154 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:23.162577 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:23.483247 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:24.124246 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:25.404975 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:27.965471 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:31.416128 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:16:33.086145 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:16:43.327151 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-842240 --driver=kvm2  --container-runtime=crio: (51.09688689s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-838086
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-842240
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-842240" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-842240
E0911 11:17:03.808344 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-842240: (1.034578564s)
helpers_test.go:175: Cleaning up "first-838086" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-838086
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-838086: (1.044168435s)
--- PASS: TestMinikubeProfile (100.90s)

TestMountStart/serial/StartWithMountFirst (28.65s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-051535 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-051535 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.647493087s)
--- PASS: TestMountStart/serial/StartWithMountFirst (28.65s)

TestMountStart/serial/VerifyMountFirst (0.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-051535 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-051535 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.43s)

TestMountStart/serial/StartWithMountSecond (30.91s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-068762 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0911 11:17:44.769120 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-068762 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.904403964s)
--- PASS: TestMountStart/serial/StartWithMountSecond (30.91s)

TestMountStart/serial/VerifyMountSecond (0.45s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-068762 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-068762 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.45s)

TestMountStart/serial/DeleteFirst (0.9s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-051535 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.90s)

TestMountStart/serial/VerifyMountPostDelete (0.41s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-068762 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-068762 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-068762
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-068762: (1.167267385s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (26.22s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-068762
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-068762: (25.214761442s)
--- PASS: TestMountStart/serial/RestartStopped (26.22s)

TestMountStart/serial/VerifyMountPostStop (0.39s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-068762 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-068762 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.39s)

TestMultiNode/serial/FreshStart2Nodes (111.42s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:85: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378707 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0911 11:18:47.570189 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:19:06.690032 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:19:15.053522 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:19:15.256944 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
multinode_test.go:85: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378707 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m50.998912349s)
multinode_test.go:91: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (111.42s)

TestMultiNode/serial/DeployApp2Nodes (6.04s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:481: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:486: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- rollout status deployment/busybox
multinode_test.go:486: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-378707 -- rollout status deployment/busybox: (4.226721096s)
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:516: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-4jnst -- nslookup kubernetes.io
multinode_test.go:524: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-f9d7x -- nslookup kubernetes.io
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-4jnst -- nslookup kubernetes.default
multinode_test.go:534: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-f9d7x -- nslookup kubernetes.default
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-4jnst -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:542: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-378707 -- exec busybox-5bc68d56bd-f9d7x -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.04s)

TestMultiNode/serial/AddNode (41.46s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:110: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-378707 -v 3 --alsologtostderr
multinode_test.go:110: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-378707 -v 3 --alsologtostderr: (40.868926246s)
multinode_test.go:116: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (41.46s)

TestMultiNode/serial/ProfileList (0.22s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:132: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.22s)

TestMultiNode/serial/CopyFile (7.57s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:173: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp testdata/cp-test.txt multinode-378707:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile813539875/001/cp-test_multinode-378707.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707:/home/docker/cp-test.txt multinode-378707-m02:/home/docker/cp-test_multinode-378707_multinode-378707-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m02 "sudo cat /home/docker/cp-test_multinode-378707_multinode-378707-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707:/home/docker/cp-test.txt multinode-378707-m03:/home/docker/cp-test_multinode-378707_multinode-378707-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m03 "sudo cat /home/docker/cp-test_multinode-378707_multinode-378707-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp testdata/cp-test.txt multinode-378707-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile813539875/001/cp-test_multinode-378707-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707-m02:/home/docker/cp-test.txt multinode-378707:/home/docker/cp-test_multinode-378707-m02_multinode-378707.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707 "sudo cat /home/docker/cp-test_multinode-378707-m02_multinode-378707.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707-m02:/home/docker/cp-test.txt multinode-378707-m03:/home/docker/cp-test_multinode-378707-m02_multinode-378707-m03.txt
E0911 11:21:22.842555 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m03 "sudo cat /home/docker/cp-test_multinode-378707-m02_multinode-378707-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp testdata/cp-test.txt multinode-378707-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile813539875/001/cp-test_multinode-378707-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707-m03:/home/docker/cp-test.txt multinode-378707:/home/docker/cp-test_multinode-378707-m03_multinode-378707.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707 "sudo cat /home/docker/cp-test_multinode-378707-m03_multinode-378707.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 cp multinode-378707-m03:/home/docker/cp-test.txt multinode-378707-m02:/home/docker/cp-test_multinode-378707-m03_multinode-378707-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 ssh -n multinode-378707-m02 "sudo cat /home/docker/cp-test_multinode-378707-m03_multinode-378707-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.57s)

TestMultiNode/serial/StopNode (2.99s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:210: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 node stop m03
multinode_test.go:210: (dbg) Done: out/minikube-linux-amd64 -p multinode-378707 node stop m03: (2.084346307s)
multinode_test.go:216: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status
multinode_test.go:216: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378707 status: exit status 7 (452.649366ms)

-- stdout --
	multinode-378707
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-378707-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-378707-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:223: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status --alsologtostderr
multinode_test.go:223: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-378707 status --alsologtostderr: exit status 7 (455.304208ms)

-- stdout --
	multinode-378707
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-378707-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-378707-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0911 11:21:28.357834 2237629 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:21:28.357966 2237629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:21:28.357975 2237629 out.go:309] Setting ErrFile to fd 2...
	I0911 11:21:28.357979 2237629 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:21:28.358173 2237629 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:21:28.358352 2237629 out.go:303] Setting JSON to false
	I0911 11:21:28.358388 2237629 mustload.go:65] Loading cluster: multinode-378707
	I0911 11:21:28.358459 2237629 notify.go:220] Checking for updates...
	I0911 11:21:28.358779 2237629 config.go:182] Loaded profile config "multinode-378707": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:21:28.358793 2237629 status.go:255] checking status of multinode-378707 ...
	I0911 11:21:28.359171 2237629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:21:28.359234 2237629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:21:28.379222 2237629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42337
	I0911 11:21:28.379679 2237629 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:21:28.380352 2237629 main.go:141] libmachine: Using API Version  1
	I0911 11:21:28.380376 2237629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:21:28.380797 2237629 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:21:28.381037 2237629 main.go:141] libmachine: (multinode-378707) Calling .GetState
	I0911 11:21:28.382638 2237629 status.go:330] multinode-378707 host status = "Running" (err=<nil>)
	I0911 11:21:28.382655 2237629 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:21:28.383085 2237629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:21:28.383142 2237629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:21:28.400377 2237629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33109
	I0911 11:21:28.400785 2237629 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:21:28.401380 2237629 main.go:141] libmachine: Using API Version  1
	I0911 11:21:28.401403 2237629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:21:28.401768 2237629 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:21:28.401939 2237629 main.go:141] libmachine: (multinode-378707) Calling .GetIP
	I0911 11:21:28.404956 2237629 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:21:28.405367 2237629 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:21:28.405402 2237629 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:21:28.405526 2237629 host.go:66] Checking if "multinode-378707" exists ...
	I0911 11:21:28.405877 2237629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:21:28.405920 2237629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:21:28.421275 2237629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34519
	I0911 11:21:28.421761 2237629 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:21:28.422310 2237629 main.go:141] libmachine: Using API Version  1
	I0911 11:21:28.422331 2237629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:21:28.422630 2237629 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:21:28.422828 2237629 main.go:141] libmachine: (multinode-378707) Calling .DriverName
	I0911 11:21:28.423082 2237629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:21:28.423117 2237629 main.go:141] libmachine: (multinode-378707) Calling .GetSSHHostname
	I0911 11:21:28.425869 2237629 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:21:28.426327 2237629 main.go:141] libmachine: (multinode-378707) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:31:1a", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:18:52 +0000 UTC Type:0 Mac:52:54:00:57:31:1a Iaid: IPaddr:192.168.39.237 Prefix:24 Hostname:multinode-378707 Clientid:01:52:54:00:57:31:1a}
	I0911 11:21:28.426355 2237629 main.go:141] libmachine: (multinode-378707) DBG | domain multinode-378707 has defined IP address 192.168.39.237 and MAC address 52:54:00:57:31:1a in network mk-multinode-378707
	I0911 11:21:28.426514 2237629 main.go:141] libmachine: (multinode-378707) Calling .GetSSHPort
	I0911 11:21:28.426722 2237629 main.go:141] libmachine: (multinode-378707) Calling .GetSSHKeyPath
	I0911 11:21:28.426896 2237629 main.go:141] libmachine: (multinode-378707) Calling .GetSSHUsername
	I0911 11:21:28.427029 2237629 sshutil.go:53] new ssh client: &{IP:192.168.39.237 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707/id_rsa Username:docker}
	I0911 11:21:28.525488 2237629 ssh_runner.go:195] Run: systemctl --version
	I0911 11:21:28.531481 2237629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:21:28.546745 2237629 kubeconfig.go:92] found "multinode-378707" server: "https://192.168.39.237:8443"
	I0911 11:21:28.546783 2237629 api_server.go:166] Checking apiserver status ...
	I0911 11:21:28.546821 2237629 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0911 11:21:28.563256 2237629 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1108/cgroup
	I0911 11:21:28.573616 2237629 api_server.go:182] apiserver freezer: "10:freezer:/kubepods/burstable/pod4ac3958118ce3f6e7dda52fe654787ec/crio-b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438"
	I0911 11:21:28.573688 2237629 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/kubepods/burstable/pod4ac3958118ce3f6e7dda52fe654787ec/crio-b09f331133be7dc95bba693aa119ec0fa378d2d29ddefd6988ea9aaa2df0d438/freezer.state
	I0911 11:21:28.584473 2237629 api_server.go:204] freezer state: "THAWED"
	I0911 11:21:28.584514 2237629 api_server.go:253] Checking apiserver healthz at https://192.168.39.237:8443/healthz ...
	I0911 11:21:28.590307 2237629 api_server.go:279] https://192.168.39.237:8443/healthz returned 200:
	ok
	I0911 11:21:28.590345 2237629 status.go:421] multinode-378707 apiserver status = Running (err=<nil>)
	I0911 11:21:28.590357 2237629 status.go:257] multinode-378707 status: &{Name:multinode-378707 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0911 11:21:28.590389 2237629 status.go:255] checking status of multinode-378707-m02 ...
	I0911 11:21:28.590775 2237629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:21:28.590808 2237629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:21:28.606043 2237629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36293
	I0911 11:21:28.606515 2237629 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:21:28.607033 2237629 main.go:141] libmachine: Using API Version  1
	I0911 11:21:28.607060 2237629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:21:28.607407 2237629 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:21:28.607592 2237629 main.go:141] libmachine: (multinode-378707-m02) Calling .GetState
	I0911 11:21:28.609043 2237629 status.go:330] multinode-378707-m02 host status = "Running" (err=<nil>)
	I0911 11:21:28.609070 2237629 host.go:66] Checking if "multinode-378707-m02" exists ...
	I0911 11:21:28.609369 2237629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:21:28.609407 2237629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:21:28.624514 2237629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32987
	I0911 11:21:28.624972 2237629 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:21:28.625473 2237629 main.go:141] libmachine: Using API Version  1
	I0911 11:21:28.625498 2237629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:21:28.625808 2237629 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:21:28.626006 2237629 main.go:141] libmachine: (multinode-378707-m02) Calling .GetIP
	I0911 11:21:28.628859 2237629 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:21:28.629324 2237629 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:21:28.629361 2237629 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:21:28.629510 2237629 host.go:66] Checking if "multinode-378707-m02" exists ...
	I0911 11:21:28.629815 2237629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:21:28.629859 2237629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:21:28.644763 2237629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35547
	I0911 11:21:28.645254 2237629 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:21:28.645771 2237629 main.go:141] libmachine: Using API Version  1
	I0911 11:21:28.645796 2237629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:21:28.646101 2237629 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:21:28.646297 2237629 main.go:141] libmachine: (multinode-378707-m02) Calling .DriverName
	I0911 11:21:28.646548 2237629 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0911 11:21:28.646575 2237629 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHHostname
	I0911 11:21:28.649625 2237629 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:21:28.650110 2237629 main.go:141] libmachine: (multinode-378707-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:f1:8c:f4", ip: ""} in network mk-multinode-378707: {Iface:virbr1 ExpiryTime:2023-09-11 12:20:01 +0000 UTC Type:0 Mac:52:54:00:f1:8c:f4 Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:multinode-378707-m02 Clientid:01:52:54:00:f1:8c:f4}
	I0911 11:21:28.650147 2237629 main.go:141] libmachine: (multinode-378707-m02) DBG | domain multinode-378707-m02 has defined IP address 192.168.39.220 and MAC address 52:54:00:f1:8c:f4 in network mk-multinode-378707
	I0911 11:21:28.650258 2237629 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHPort
	I0911 11:21:28.650435 2237629 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHKeyPath
	I0911 11:21:28.650595 2237629 main.go:141] libmachine: (multinode-378707-m02) Calling .GetSSHUsername
	I0911 11:21:28.650755 2237629 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/17223-2215273/.minikube/machines/multinode-378707-m02/id_rsa Username:docker}
	I0911 11:21:28.736865 2237629 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0911 11:21:28.749785 2237629 status.go:257] multinode-378707-m02 status: &{Name:multinode-378707-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0911 11:21:28.749825 2237629 status.go:255] checking status of multinode-378707-m03 ...
	I0911 11:21:28.750163 2237629 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0911 11:21:28.750201 2237629 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0911 11:21:28.766194 2237629 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38251
	I0911 11:21:28.766622 2237629 main.go:141] libmachine: () Calling .GetVersion
	I0911 11:21:28.767091 2237629 main.go:141] libmachine: Using API Version  1
	I0911 11:21:28.767116 2237629 main.go:141] libmachine: () Calling .SetConfigRaw
	I0911 11:21:28.767471 2237629 main.go:141] libmachine: () Calling .GetMachineName
	I0911 11:21:28.767678 2237629 main.go:141] libmachine: (multinode-378707-m03) Calling .GetState
	I0911 11:21:28.769028 2237629 status.go:330] multinode-378707-m03 host status = "Stopped" (err=<nil>)
	I0911 11:21:28.769044 2237629 status.go:343] host is not running, skipping remaining checks
	I0911 11:21:28.769052 2237629 status.go:257] multinode-378707-m03 status: &{Name:multinode-378707-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.99s)
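
The StopNode status check logged above also shows how `minikube status` decides the apiserver is Running on the control-plane node: it finds the kube-apiserver process, resolves its freezer cgroup, confirms the container is THAWED, and finally probes /healthz on https://192.168.39.237:8443. A minimal sketch of the same probes run by hand inside the guest after `minikube ssh -p multinode-378707`; the awk-based cgroup lookup and the bare curl are illustrative only, since the test authenticates with the cluster's client certificates:

    PID=$(sudo pgrep -xnf 'kube-apiserver.*minikube.*')            # newest apiserver process
    CG=$(sudo awk -F: '/freezer/ {print $3}' /proc/$PID/cgroup)    # its freezer cgroup path
    sudo cat /sys/fs/cgroup/freezer$CG/freezer.state               # expect THAWED
    curl -k https://192.168.39.237:8443/healthz                    # expect "ok"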

                                                
                                    
TestMultiNode/serial/StartAfterStop (32.31s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 node start m03 --alsologtostderr
E0911 11:21:50.530327 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
multinode_test.go:254: (dbg) Done: out/minikube-linux-amd64 -p multinode-378707 node start m03 --alsologtostderr: (31.640032886s)
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status
multinode_test.go:275: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (32.31s)

                                                
                                    
TestMultiNode/serial/DeleteNode (1.82s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:394: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 node delete m03
multinode_test.go:394: (dbg) Done: out/minikube-linux-amd64 -p multinode-378707 node delete m03: (1.259826742s)
multinode_test.go:400: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status --alsologtostderr
multinode_test.go:424: (dbg) Run:  kubectl get nodes
multinode_test.go:432: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (1.82s)
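
The go-template passed to `kubectl get nodes` above flattens each node's conditions down to just the Ready status, one value per line, which is how the test asserts that every remaining node still reports Ready after the deletion. A roughly equivalent jsonpath form, shown only for comparison:

    kubectl get nodes -o jsonpath='{range .items[*]}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'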

                                                
                                    
TestMultiNode/serial/RestartMultiNode (533.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:354: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378707 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0911 11:36:22.843622 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:38:47.568988 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:39:15.053316 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:41:22.842424 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 11:42:18.105006 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:43:47.569011 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:44:15.053710 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
multinode_test.go:354: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378707 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (8m52.897835452s)
multinode_test.go:360: (dbg) Run:  out/minikube-linux-amd64 -p multinode-378707 status --alsologtostderr
multinode_test.go:374: (dbg) Run:  kubectl get nodes
multinode_test.go:382: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (533.47s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (51.83s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:443: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-378707
multinode_test.go:452: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378707-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:452: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-378707-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (67.625163ms)

                                                
                                                
-- stdout --
	* [multinode-378707-m02] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-378707-m02' is duplicated with machine name 'multinode-378707-m02' in profile 'multinode-378707'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:460: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-378707-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:460: (dbg) Done: out/minikube-linux-amd64 start -p multinode-378707-m03 --driver=kvm2  --container-runtime=crio: (50.386935211s)
multinode_test.go:467: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-378707
multinode_test.go:467: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-378707: exit status 80 (248.525774ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-378707
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-378707-m03 already exists in multinode-378707-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-378707-m03
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-378707-m03: (1.083808631s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (51.83s)
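
Both rejections in this test are deliberate guards: a new profile cannot reuse a machine name that already belongs to another profile (exit 14, MK_USAGE), and `minikube node add` refuses to create a node whose name would collide with an existing standalone profile (exit 80, GUEST_NODE_ADD). A sketch of the flow one would expect to succeed once the colliding profile is removed; the behaviour after the delete is an assumption, since the test stops at cleanup:

    minikube delete -p multinode-378707-m03      # free the name held by the standalone profile
    minikube node add -p multinode-378707        # should now add the next worker, multinode-378707-m03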

                                                
                                    
TestScheduledStopUnix (125.14s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-495137 --memory=2048 --driver=kvm2  --container-runtime=crio
E0911 11:49:15.053633 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:49:25.891603 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-495137 --memory=2048 --driver=kvm2  --container-runtime=crio: (53.422207849s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-495137 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-495137 -n scheduled-stop-495137
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-495137 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-495137 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-495137 -n scheduled-stop-495137
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-495137
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-495137 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-495137
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-495137: exit status 7 (66.763287ms)

                                                
                                                
-- stdout --
	scheduled-stop-495137
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-495137 -n scheduled-stop-495137
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-495137 -n scheduled-stop-495137: exit status 7 (68.98753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-495137" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-495137
--- PASS: TestScheduledStopUnix (125.14s)
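
The scheduled-stop flow exercised above maps directly onto the minikube CLI; a minimal sketch using the same profile name and flags that appear in the log:

    minikube stop   -p scheduled-stop-495137 --schedule 5m                      # arm a stop five minutes out
    minikube status -p scheduled-stop-495137 --format '{{.TimeToStop}}'         # inspect the pending schedule
    minikube stop   -p scheduled-stop-495137 --cancel-scheduled                 # cancel it
    minikube stop   -p scheduled-stop-495137 --schedule 15s                     # re-arm; the host stops ~15s later
    minikube status -p scheduled-stop-495137                                    # exits 7 once the host reports Stopped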

                                                
                                    
TestKubernetesUpgrade (158.63s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:235: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:235: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.16.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m10.454429154s)
version_upgrade_test.go:240: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-604202
version_upgrade_test.go:240: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-604202: (2.633638919s)
version_upgrade_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-604202 status --format={{.Host}}
version_upgrade_test.go:245: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-604202 status --format={{.Host}}: exit status 7 (78.814923ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:247: status error: exit status 7 (may be ok)
version_upgrade_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (52.29535829s)
version_upgrade_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-604202 version --output=json
version_upgrade_test.go:280: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:282: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:282: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.16.0 --driver=kvm2  --container-runtime=crio: exit status 106 (112.98182ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-604202] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.28.1 cluster to v1.16.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.16.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-604202
	    minikube start -p kubernetes-upgrade-604202 --kubernetes-version=v1.16.0
	    
	    2) Create a second cluster with Kubernetes 1.16.0, by running:
	    
	    minikube start -p kubernetes-upgrade-6042022 --kubernetes-version=v1.16.0
	    
	    3) Use the existing cluster at version Kubernetes 1.28.1, by running:
	    
	    minikube start -p kubernetes-upgrade-604202 --kubernetes-version=v1.28.1
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:286: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:288: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:288: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-604202 --memory=2200 --kubernetes-version=v1.28.1 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (31.775028795s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-604202" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-604202
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-604202: (1.178204164s)
--- PASS: TestKubernetesUpgrade (158.63s)
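
The upgrade path this test drives is start at v1.16.0, stop, restart at v1.28.1; the subsequent downgrade attempt is refused with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED), and minikube's suggestion box above lists the recovery options. A trimmed sketch of the same sequence:

    minikube start -p kubernetes-upgrade-604202 --kubernetes-version=v1.16.0 --driver=kvm2 --container-runtime=crio
    minikube stop  -p kubernetes-upgrade-604202
    minikube start -p kubernetes-upgrade-604202 --kubernetes-version=v1.28.1 --driver=kvm2 --container-runtime=crio   # in-place upgrade
    minikube start -p kubernetes-upgrade-604202 --kubernetes-version=v1.16.0    # refused; a downgrade needs delete + recreate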

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.35s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.35s)

                                                
                                    
TestPause/serial/Start (105.88s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-474712 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-474712 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m45.877633176s)
--- PASS: TestPause/serial/Start (105.88s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-690677 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-690677 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (76.182868ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-690677] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)
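
As the error text above states, --no-kubernetes and --kubernetes-version are mutually exclusive, and a version pinned in the global config has to be cleared before a no-Kubernetes start will be accepted. A minimal sketch:

    minikube config unset kubernetes-version                    # drop any globally pinned version
    minikube start -p NoKubernetes-690677 --no-kubernetes --driver=kvm2 --container-runtime=crio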

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (63.55s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-690677 --driver=kvm2  --container-runtime=crio
E0911 11:53:47.568974 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-690677 --driver=kvm2  --container-runtime=crio: (1m3.291874028s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-690677 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (63.55s)

                                                
                                    
TestNetworkPlugins/group/false (3.64s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-640433 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-640433 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (129.504195ms)

                                                
                                                
-- stdout --
	* [false-640433] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=17223
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0911 11:53:57.073122 2248213 out.go:296] Setting OutFile to fd 1 ...
	I0911 11:53:57.073330 2248213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:53:57.073341 2248213 out.go:309] Setting ErrFile to fd 2...
	I0911 11:53:57.073348 2248213 out.go:343] TERM=,COLORTERM=, which probably does not support color
	I0911 11:53:57.073685 2248213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/17223-2215273/.minikube/bin
	I0911 11:53:57.074477 2248213 out.go:303] Setting JSON to false
	I0911 11:53:57.075793 2248213 start.go:128] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":236188,"bootTime":1694197049,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1041-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0911 11:53:57.075882 2248213 start.go:138] virtualization: kvm guest
	I0911 11:53:57.080193 2248213 out.go:177] * [false-640433] minikube v1.31.2 on Ubuntu 20.04 (kvm/amd64)
	I0911 11:53:57.081852 2248213 notify.go:220] Checking for updates...
	I0911 11:53:57.083483 2248213 out.go:177]   - MINIKUBE_LOCATION=17223
	I0911 11:53:57.085160 2248213 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0911 11:53:57.086894 2248213 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/17223-2215273/kubeconfig
	I0911 11:53:57.089980 2248213 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/17223-2215273/.minikube
	I0911 11:53:57.092158 2248213 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0911 11:53:57.093767 2248213 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0911 11:53:57.095927 2248213 config.go:182] Loaded profile config "NoKubernetes-690677": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:53:57.096083 2248213 config.go:182] Loaded profile config "pause-474712": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.1
	I0911 11:53:57.096191 2248213 config.go:182] Loaded profile config "stopped-upgrade-715426": Driver=, ContainerRuntime=crio, KubernetesVersion=v1.17.0
	I0911 11:53:57.096327 2248213 driver.go:373] Setting default libvirt URI to qemu:///system
	I0911 11:53:57.138743 2248213 out.go:177] * Using the kvm2 driver based on user configuration
	I0911 11:53:57.140439 2248213 start.go:298] selected driver: kvm2
	I0911 11:53:57.140472 2248213 start.go:902] validating driver "kvm2" against <nil>
	I0911 11:53:57.140488 2248213 start.go:913] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0911 11:53:57.143048 2248213 out.go:177] 
	W0911 11:53:57.144857 2248213 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0911 11:53:57.146419 2248213 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-640433 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-640433" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-640433

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-640433"

                                                
                                                
----------------------- debugLogs end: false-640433 [took: 3.255821793s] --------------------------------
helpers_test.go:175: Cleaning up "false-640433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-640433
--- PASS: TestNetworkPlugins/group/false (3.64s)
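
This failure is the expected outcome: with --container-runtime=crio minikube insists on a CNI, so --cni=false is rejected with MK_USAGE before any VM is created, which is also why every debugLogs probe above reports a missing profile or context. A sketch of a start line that should be accepted instead, assuming the built-in bridge CNI choice:

    minikube start -p false-640433 --memory=2048 --cni=bridge --driver=kvm2 --container-runtime=crio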

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (15.47s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-690677 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-690677 --no-kubernetes --driver=kvm2  --container-runtime=crio: (14.099877206s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-690677 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-690677 status -o json: exit status 2 (244.25309ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-690677","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-690677
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-690677: (1.129068763s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (15.47s)

                                                
                                    
TestNoKubernetes/serial/Start (29.19s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-690677 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-690677 --no-kubernetes --driver=kvm2  --container-runtime=crio: (29.192610765s)
--- PASS: TestNoKubernetes/serial/Start (29.19s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-690677 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-690677 "sudo systemctl is-active --quiet service kubelet": exit status 1 (224.171421ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.22s)
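
The exit status 1 here is the pass condition: inside the guest, `systemctl is-active` exits with status 3 because the kubelet unit is inactive, and `minikube ssh` surfaces that as a failing exit on the host. A sketch of the same probe:

    minikube ssh -p NoKubernetes-690677 "sudo systemctl is-active --quiet service kubelet"
    echo $?                                                      # non-zero while Kubernetes is disabled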

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.28s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-690677
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-690677: (1.275962484s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (68.94s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-690677 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-690677 --driver=kvm2  --container-runtime=crio: (1m8.944065241s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (68.94s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:219: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-715426
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.46s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-690677 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-690677 "sudo systemctl is-active --quiet service kubelet": exit status 1 (213.212197ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.21s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (212.98s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-642215 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-642215 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (3m32.982187391s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (212.98s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (143.77s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-352076 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-352076 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (2m23.765392479s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (143.77s)

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (96.79s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-235462 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0911 11:58:47.569037 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 11:58:58.105651 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 11:59:15.053450 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-235462 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m36.78898899s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (96.79s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-352076 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [eb4971d9-a8b6-48b7-9d6c-1b47f64afce7] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [eb4971d9-a8b6-48b7-9d6c-1b47f64afce7] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.031493066s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-352076 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.46s)

                                                
                                    
TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-235462 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5dbb1d01-9b7e-4820-9325-94dcb5f47b20] Pending
helpers_test.go:344: "busybox" [5dbb1d01-9b7e-4820-9325-94dcb5f47b20] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5dbb1d01-9b7e-4820-9325-94dcb5f47b20] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.036701194s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-235462 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.46s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-352076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-352076 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.187646102s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-352076 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.28s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-235462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-235462 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.158734954s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-235462 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.25s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-642215 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7495b31e-0dad-4554-82d1-2aad824ed73d] Pending
helpers_test.go:344: "busybox" [7495b31e-0dad-4554-82d1-2aad824ed73d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [7495b31e-0dad-4554-82d1-2aad824ed73d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 8.042594461s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-642215 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (8.46s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-642215 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-642215 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.93s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-484027 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0911 12:01:22.841778 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-484027 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m0.018332542s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (60.02s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.5s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [1a6325f3-c610-437f-a81f-36da95fc4ebf] Pending
helpers_test.go:344: "busybox" [1a6325f3-c610-437f-a81f-36da95fc4ebf] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [1a6325f3-c610-437f-a81f-36da95fc4ebf] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.038292905s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.50s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-484027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-484027 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.047304434s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-484027 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (730.09s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-352076 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-352076 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (12m9.800864797s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-352076 -n no-preload-352076
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (730.09s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (623.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-235462 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-235462 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (10m23.273360286s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-235462 -n embed-certs-235462
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (623.56s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (347.86s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-642215 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0
E0911 12:03:30.620377 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 12:03:47.568977 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
E0911 12:04:15.053515 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-642215 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.16.0: (5m47.559114719s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-642215 -n old-k8s-version-642215
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (347.86s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (494.02s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-484027 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
E0911 12:06:05.892299 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
E0911 12:06:22.841795 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/ingress-addon-legacy-508741/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-484027 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (8m13.741022927s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-484027 -n default-k8s-diff-port-484027
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (494.02s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (62.65s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-867563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-867563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (1m2.6493965s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (62.65s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (71.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m11.302559239s)
--- PASS: TestNetworkPlugins/group/auto/Start (71.30s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.75s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-867563 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-867563 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.754608189s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.75s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (11.12s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-867563 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-867563 --alsologtostderr -v=3: (11.122442776s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (11.12s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.2s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-867563 -n newest-cni-867563
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-867563 -n newest-cni-867563: exit status 7 (66.193041ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-867563 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.20s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (59.55s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-867563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-867563 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.1: (59.065576562s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-867563 -n newest-cni-867563
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (59.55s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (79.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m19.215420635s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (79.22s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-640433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-640433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zx7x4" [6346966d-2ca8-4032-a3b6-b6d44bb00e7d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0911 12:28:47.569847 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/functional-312672/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zx7x4" [6346966d-2ca8-4032-a3b6-b6d44bb00e7d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.015396756s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.35s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-640433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.17s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.18s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 ssh -p newest-cni-867563 "sudo crictl images -o json"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230511-dc714da8
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.31s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (4.09s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-867563 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Done: out/minikube-linux-amd64 pause -p newest-cni-867563 --alsologtostderr -v=1: (1.682252691s)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-867563 -n newest-cni-867563
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-867563 -n newest-cni-867563: exit status 2 (339.462456ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-867563 -n newest-cni-867563
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-867563 -n newest-cni-867563: exit status 2 (289.699986ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-867563 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-867563 -n newest-cni-867563
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-867563 -n newest-cni-867563
--- PASS: TestStartStop/group/newest-cni/serial/Pause (4.09s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (100.56s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m40.559571023s)
--- PASS: TestNetworkPlugins/group/calico/Start (100.56s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (114.88s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
E0911 12:29:15.053176 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m54.878982438s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (114.88s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-sfvkc" [121c8ad1-110e-4efe-ab5c-c37c2eae4d07] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.030383947s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-640433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (14.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-640433 replace --force -f testdata/netcat-deployment.yaml
E0911 12:29:50.243753 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:50.249140 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:50.259394 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:50.279669 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:50.319992 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:50.401104 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:50.561308 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vfxt5" [150d30dd-0e24-42e5-a269-cd6beb06951f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0911 12:29:50.882059 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:51.522886 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:52.803737 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:29:55.364931 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vfxt5" [150d30dd-0e24-42e5-a269-cd6beb06951f] Running
E0911 12:30:00.486069 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.013634878s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.45s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-640433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.21s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.18s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (102.76s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
E0911 12:30:31.207548 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
E0911 12:30:32.377132 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m42.760524412s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (102.76s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-l29h2" [beebf21e-8516-46f4-a33f-7776aab40984] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 5.030143722s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-640433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (14.47s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-640433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-zz88v" [7a28ce11-ba4e-4b39-b452-ade8a37f8843] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0911 12:30:52.857859 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-zz88v" [7a28ce11-ba4e-4b39-b452-ade8a37f8843] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.01502375s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.47s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-640433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.26s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-640433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (14.4s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-640433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-vr9ns" [9d4bc356-b735-4f51-88b6-88e506b47cdf] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0911 12:31:12.168382 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/no-preload-352076/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-vr9ns" [9d4bc356-b735-4f51-88b6-88e506b47cdf] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 14.01767861s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (14.40s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (83.04s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m23.036218271s)
--- PASS: TestNetworkPlugins/group/flannel/Start (83.04s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-640433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (122.5s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
E0911 12:31:33.818717 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-640433 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (2m2.502785125s)
--- PASS: TestNetworkPlugins/group/bridge/Start (122.50s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-640433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-640433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-l682g" [dd43d9f5-2591-447f-9061-58126fb7c498] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-l682g" [dd43d9f5-2591-447f-9061-58126fb7c498] Running
E0911 12:32:15.899771 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:15.905132 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:15.915514 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:15.936302 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:15.976652 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:16.057110 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:16.217615 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:16.538565 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:17.179307 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
E0911 12:32:18.107204 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/addons-554886/client.crt: no such file or directory
E0911 12:32:18.459825 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.014519788s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.34s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-640433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.2s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.20s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-gmqcs" [35e408d0-9913-4255-8433-807dbf6aa2fd] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 5.023696303s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (5.03s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-640433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-640433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-z282d" [692e582c-aa06-48c7-875b-ebfe09319185] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-z282d" [692e582c-aa06-48c7-875b-ebfe09319185] Running
E0911 12:32:55.739565 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/old-k8s-version-642215/client.crt: no such file or directory
E0911 12:32:56.863192 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.020727531s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-640433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.18s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-640433 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.21s)

TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-640433 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-4snpw" [7974232c-80a9-4f41-901b-2cef6e73152a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-4snpw" [7974232c-80a9-4f41-901b-2cef6e73152a] Running
E0911 12:33:37.823975 2222471 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/17223-2215273/.minikube/profiles/default-k8s-diff-port-484027/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.011302542s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

TestNetworkPlugins/group/bridge/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-640433 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.17s)

TestNetworkPlugins/group/bridge/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.15s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-640433 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (36/288)

Order skipped test Duration
5 TestDownloadOnly/v1.16.0/cached-images 0
6 TestDownloadOnly/v1.16.0/binaries 0
7 TestDownloadOnly/v1.16.0/kubectl 0
12 TestDownloadOnly/v1.28.1/cached-images 0
13 TestDownloadOnly/v1.28.1/binaries 0
14 TestDownloadOnly/v1.28.1/kubectl 0
18 TestDownloadOnlyKic 0
29 TestAddons/parallel/Olm 0
39 TestDockerFlags 0
42 TestDockerEnvContainerd 0
44 TestHyperKitDriverInstallOrUpdate 0
45 TestHyperkitDriverSkipUpgrade 0
96 TestFunctional/parallel/DockerEnv 0
97 TestFunctional/parallel/PodmanEnv 0
110 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
111 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
112 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
113 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
114 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
115 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
116 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
117 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
145 TestGvisorAddon 0
146 TestImageBuild 0
179 TestKicCustomNetwork 0
180 TestKicExistingNetwork 0
181 TestKicCustomSubnet 0
182 TestKicStaticIP 0
213 TestChangeNoneUser 0
216 TestScheduledStopWindows 0
218 TestSkaffold 0
220 TestInsufficientStorage 0
224 TestMissingContainerUpgrade 0
232 TestStartStop/group/disable-driver-mounts 0.15
241 TestNetworkPlugins/group/kubenet 5.38
249 TestNetworkPlugins/group/cilium 4.05
TestDownloadOnly/v1.16.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.16.0/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.16.0/cached-images (0.00s)

TestDownloadOnly/v1.16.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.16.0/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.16.0/binaries (0.00s)

TestDownloadOnly/v1.16.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.16.0/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.16.0/kubectl (0.00s)

TestDownloadOnly/v1.28.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.28.1/cached-images
aaa_download_only_test.go:117: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.1/cached-images (0.00s)

TestDownloadOnly/v1.28.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.28.1/binaries
aaa_download_only_test.go:136: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.1/binaries (0.00s)

TestDownloadOnly/v1.28.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.28.1/kubectl
aaa_download_only_test.go:152: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.1/kubectl (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:210: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:474: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:297: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestStartStop/group/disable-driver-mounts (0.15s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-226537" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-226537
--- SKIP: TestStartStop/group/disable-driver-mounts (0.15s)

TestNetworkPlugins/group/kubenet (5.38s)
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:522: 
----------------------- debugLogs start: kubenet-640433 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-640433" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-640433

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-640433"

                                                
                                                
----------------------- debugLogs end: kubenet-640433 [took: 5.231545508s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-640433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-640433
--- SKIP: TestNetworkPlugins/group/kubenet (5.38s)

TestNetworkPlugins/group/cilium (4.05s)
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:522: 
----------------------- debugLogs start: cilium-640433 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-640433

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-640433" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-640433

>>> host: docker daemon status:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: docker daemon config:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: docker system info:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: cri-docker daemon status:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: cri-docker daemon config:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: cri-dockerd version:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: containerd daemon status:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: containerd daemon config:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: containerd config dump:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: crio daemon status:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: crio daemon config:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: /etc/crio:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

>>> host: crio config:
* Profile "cilium-640433" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-640433"

----------------------- debugLogs end: cilium-640433 [took: 3.871849366s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-640433" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-640433
--- SKIP: TestNetworkPlugins/group/cilium (4.05s)
